00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 606
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3268
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.042 The recommended git tool is: git
00:00:00.042 using credential 00000000-0000-0000-0000-000000000002
00:00:00.044 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.058 Fetching changes from the remote Git repository
00:00:00.062 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.088 Using shallow fetch with depth 1
00:00:00.088 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.088 > git --version # timeout=10
00:00:00.124 > git --version # 'git version 2.39.2'
00:00:00.124 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.159 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.159 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.792 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.803 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.813 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:04.813 > git config core.sparsecheckout # timeout=10
00:00:04.823 > git read-tree -mu HEAD # timeout=10
00:00:04.837 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:04.855 Commit message: "inventory: add WCP3 to free inventory"
00:00:04.855 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:04.954 [Pipeline] Start of Pipeline
00:00:04.970 [Pipeline] library
00:00:04.972 Loading library shm_lib@master
00:00:04.972 Library shm_lib@master is cached. Copying from home.
00:00:04.990 [Pipeline] node
00:00:04.998 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.000 [Pipeline] {
00:00:05.011 [Pipeline] catchError
00:00:05.013 [Pipeline] {
00:00:05.028 [Pipeline] wrap
00:00:05.038 [Pipeline] {
00:00:05.048 [Pipeline] stage
00:00:05.051 [Pipeline] { (Prologue)
00:00:05.279 [Pipeline] sh
00:00:05.561 + logger -p user.info -t JENKINS-CI
00:00:05.580 [Pipeline] echo
00:00:05.582 Node: GP11
00:00:05.589 [Pipeline] sh
00:00:05.889 [Pipeline] setCustomBuildProperty
00:00:05.901 [Pipeline] echo
00:00:05.902 Cleanup processes
00:00:05.907 [Pipeline] sh
00:00:06.186 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.186 3355746 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.198 [Pipeline] sh
00:00:06.482 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.482 ++ grep -v 'sudo pgrep'
00:00:06.482 ++ awk '{print $1}'
00:00:06.482 + sudo kill -9
00:00:06.482 + true
00:00:06.499 [Pipeline] cleanWs
00:00:06.509 [WS-CLEANUP] Deleting project workspace...
00:00:06.509 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.517 [WS-CLEANUP] done
00:00:06.521 [Pipeline] setCustomBuildProperty
00:00:06.537 [Pipeline] sh
00:00:06.822 + sudo git config --global --replace-all safe.directory '*'
00:00:06.923 [Pipeline] httpRequest
00:00:06.968 [Pipeline] echo
00:00:06.970 Sorcerer 10.211.164.101 is alive
00:00:06.980 [Pipeline] httpRequest
00:00:06.986 HttpMethod: GET
00:00:06.987 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:06.987 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:06.990 Response Code: HTTP/1.1 200 OK
00:00:06.991 Success: Status code 200 is in the accepted range: 200,404
00:00:06.991 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:07.710 [Pipeline] sh
00:00:07.992 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.008 [Pipeline] httpRequest
00:00:08.023 [Pipeline] echo
00:00:08.025 Sorcerer 10.211.164.101 is alive
00:00:08.032 [Pipeline] httpRequest
00:00:08.037 HttpMethod: GET
00:00:08.037 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:08.038 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:08.053 Response Code: HTTP/1.1 200 OK
00:00:08.054 Success: Status code 200 is in the accepted range: 200,404
00:00:08.054 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:07.839 [Pipeline] sh
00:01:08.123 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:11.434 [Pipeline] sh
00:01:11.718 + git -C spdk log --oneline -n5
00:01:11.718 719d03c6a sock/uring: only register net impl if supported
00:01:11.718 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:11.718 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:11.718 6c7c1f57e accel: add sequence outstanding stat
00:01:11.718 3bc8e6a26 accel: add utility to put task
00:01:11.739 [Pipeline] withCredentials
00:01:11.750 > git --version # timeout=10
00:01:11.763 > git --version # 'git version 2.39.2'
00:01:11.781 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:11.784 [Pipeline] {
00:01:11.793 [Pipeline] retry
00:01:11.796 [Pipeline] {
00:01:11.814 [Pipeline] sh
00:01:12.109 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:14.666 [Pipeline] }
00:01:14.692 [Pipeline] // retry
00:01:14.700 [Pipeline] }
00:01:14.724 [Pipeline] // withCredentials
00:01:14.737 [Pipeline] httpRequest
00:01:14.758 [Pipeline] echo
00:01:14.760 Sorcerer 10.211.164.101 is alive
00:01:14.771 [Pipeline] httpRequest
00:01:14.776 HttpMethod: GET
00:01:14.777 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:14.778 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:14.781 Response Code: HTTP/1.1 200 OK
00:01:14.781 Success: Status code 200 is in the accepted range: 200,404
00:01:14.782 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:22.505 [Pipeline] sh
00:01:22.788 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:24.700 [Pipeline] sh
00:01:25.001 + git -C dpdk log --oneline -n5
00:01:25.002 eeb0605f11 version: 23.11.0
00:01:25.002 238778122a doc: update release notes for 23.11
00:01:25.002 46aa6b3cfc doc: fix description of RSS features
00:01:25.002 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:25.002 7e421ae345 devtools: support skipping forbid rule check
00:01:25.011 [Pipeline] }
00:01:25.026 [Pipeline] // stage
00:01:25.033 [Pipeline] stage
00:01:25.036 [Pipeline] { (Prepare)
00:01:25.062 [Pipeline] writeFile
00:01:25.081 [Pipeline] sh
00:01:25.374 + logger -p user.info -t JENKINS-CI
00:01:25.385 [Pipeline] sh
00:01:25.664 + logger -p user.info -t JENKINS-CI
00:01:25.675 [Pipeline] sh
00:01:25.956 + cat autorun-spdk.conf
00:01:25.957 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.957 SPDK_TEST_NVMF=1
00:01:25.957 SPDK_TEST_NVME_CLI=1
00:01:25.957 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:25.957 SPDK_TEST_NVMF_NICS=e810
00:01:25.957 SPDK_TEST_VFIOUSER=1
00:01:25.957 SPDK_RUN_UBSAN=1
00:01:25.957 NET_TYPE=phy
00:01:25.957 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:25.957 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:25.964 RUN_NIGHTLY=1
00:01:25.968 [Pipeline] readFile
00:01:25.995 [Pipeline] withEnv
00:01:25.997 [Pipeline] {
00:01:26.011 [Pipeline] sh
00:01:26.294 + set -ex
00:01:26.294 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:26.294 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:26.294 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.294 ++ SPDK_TEST_NVMF=1
00:01:26.294 ++ SPDK_TEST_NVME_CLI=1
00:01:26.294 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:26.294 ++ SPDK_TEST_NVMF_NICS=e810
00:01:26.294 ++ SPDK_TEST_VFIOUSER=1
00:01:26.294 ++ SPDK_RUN_UBSAN=1
00:01:26.294 ++ NET_TYPE=phy
00:01:26.294 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:26.294 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:26.294 ++ RUN_NIGHTLY=1
00:01:26.294 + case $SPDK_TEST_NVMF_NICS in
00:01:26.294 + DRIVERS=ice
00:01:26.294 + [[ tcp == \r\d\m\a ]]
00:01:26.294 + [[ -n ice ]]
00:01:26.294 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:26.294 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:26.294 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:26.294 rmmod: ERROR: Module irdma is not currently loaded
00:01:26.294 rmmod: ERROR: Module i40iw is not currently loaded
00:01:26.294 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:26.294 + true
00:01:26.294 + for D in $DRIVERS
00:01:26.294 + sudo modprobe ice
00:01:26.294 + exit 0
00:01:26.303 [Pipeline] }
00:01:26.325 [Pipeline] // withEnv
00:01:26.331 [Pipeline] }
00:01:26.351 [Pipeline] // stage
00:01:26.363 [Pipeline] catchError
00:01:26.365 [Pipeline] {
00:01:26.382 [Pipeline] timeout
00:01:26.382 Timeout set to expire in 50 min
00:01:26.384 [Pipeline] {
00:01:26.400 [Pipeline] stage
00:01:26.402 [Pipeline] { (Tests)
00:01:26.417 [Pipeline] sh
00:01:26.700 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.700 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.700 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.700 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:26.700 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:26.700 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:26.700 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:26.700 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.700 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:26.700 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:26.700 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:26.700 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:26.700 + source /etc/os-release
00:01:26.700 ++ NAME='Fedora Linux'
00:01:26.700 ++ VERSION='38 (Cloud Edition)'
00:01:26.700 ++ ID=fedora
00:01:26.700 ++ VERSION_ID=38
00:01:26.700 ++ VERSION_CODENAME=
00:01:26.700 ++ PLATFORM_ID=platform:f38
00:01:26.700 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:26.700 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:26.700 ++ LOGO=fedora-logo-icon
00:01:26.700 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:26.700 ++ HOME_URL=https://fedoraproject.org/
00:01:26.700 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:26.700 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:26.700 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:26.700 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:26.700 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:26.700 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:26.700 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:26.700 ++ SUPPORT_END=2024-05-14
00:01:26.700 ++ VARIANT='Cloud Edition'
00:01:26.700 ++ VARIANT_ID=cloud
00:01:26.700 + uname -a
00:01:26.700 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:26.700 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:27.636 Hugepages
00:01:27.636 node hugesize free / total
00:01:27.636 node0 1048576kB 0 / 0
00:01:27.636 node0 2048kB 0 / 0
00:01:27.636 node1 1048576kB 0 / 0
00:01:27.636 node1 2048kB 0 / 0
00:01:27.636
00:01:27.636 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:27.636 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:27.636 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:27.636 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:27.636 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:27.636 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:27.636 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:27.636 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:27.636 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:27.636 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:27.636 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:27.636 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:27.636 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:27.636 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:27.636 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:27.636 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:27.636 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:27.636 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:27.636 + rm -f /tmp/spdk-ld-path
00:01:27.636 + source autorun-spdk.conf
00:01:27.636 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.636 ++ SPDK_TEST_NVMF=1
00:01:27.636 ++ SPDK_TEST_NVME_CLI=1
00:01:27.636 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.636 ++ SPDK_TEST_NVMF_NICS=e810
00:01:27.636 ++ SPDK_TEST_VFIOUSER=1
00:01:27.636 ++ SPDK_RUN_UBSAN=1
00:01:27.636 ++ NET_TYPE=phy
00:01:27.636 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:27.636 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:27.636 ++ RUN_NIGHTLY=1
00:01:27.636 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:27.636 + [[ -n '' ]]
00:01:27.636 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:27.636 + for M in /var/spdk/build-*-manifest.txt
00:01:27.636 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:27.636 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:27.636 + for M in /var/spdk/build-*-manifest.txt
00:01:27.636 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:27.636 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:27.636 ++ uname
00:01:27.636 + [[ Linux == \L\i\n\u\x ]]
00:01:27.636 + sudo dmesg -T
00:01:27.896 + sudo dmesg --clear
00:01:27.896 + dmesg_pid=3357073
00:01:27.896 + [[ Fedora Linux == FreeBSD ]]
00:01:27.896 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.896 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.896 + sudo dmesg -Tw
00:01:27.896 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:27.896 + [[ -x /usr/src/fio-static/fio ]]
00:01:27.896 + export FIO_BIN=/usr/src/fio-static/fio
00:01:27.896 + FIO_BIN=/usr/src/fio-static/fio
00:01:27.896 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:27.896 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:27.896 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:27.896 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.896 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:27.896 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:27.896 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.896 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:27.896 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:27.896 Test configuration:
00:01:27.896 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.896 SPDK_TEST_NVMF=1
00:01:27.896 SPDK_TEST_NVME_CLI=1
00:01:27.896 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.896 SPDK_TEST_NVMF_NICS=e810
00:01:27.896 SPDK_TEST_VFIOUSER=1
00:01:27.896 SPDK_RUN_UBSAN=1
00:01:27.896 NET_TYPE=phy
00:01:27.896 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:27.896 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:27.896 RUN_NIGHTLY=1
18:33:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:27.897 18:33:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:27.897 18:33:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:27.897 18:33:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:27.897 18:33:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.897 18:33:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.897 18:33:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.897 18:33:15 -- paths/export.sh@5 -- $ export PATH
00:01:27.897 18:33:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.897 18:33:15 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:27.897 18:33:15 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:27.897 18:33:15 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720974795.XXXXXX
00:01:27.897 18:33:15 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720974795.tg7mPv
00:01:27.897 18:33:15 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:01:27.897 18:33:15 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']'
00:01:27.897 18:33:15 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:27.897 18:33:15 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:27.897 18:33:15 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:27.897 18:33:15 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:27.897 18:33:15 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:27.897 18:33:15 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:27.897 18:33:15 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.897 18:33:15 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:27.897 18:33:15 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:27.897 18:33:15 -- pm/common@17 -- $ local monitor
00:01:27.897 18:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.897 18:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.897 18:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.897 18:33:15 -- pm/common@21 -- $ date +%s
00:01:27.897 18:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.897 18:33:15 -- pm/common@21 -- $ date +%s
00:01:27.897 18:33:15 -- pm/common@25 -- $ sleep 1
00:01:27.897 18:33:15 -- pm/common@21 -- $ date +%s
00:01:27.897 18:33:15 -- pm/common@21 -- $ date +%s
00:01:27.897 18:33:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720974795
00:01:27.897 18:33:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720974795
00:01:27.897 18:33:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720974795
00:01:27.897 18:33:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720974795
00:01:27.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720974795_collect-vmstat.pm.log
00:01:27.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720974795_collect-cpu-load.pm.log
00:01:27.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720974795_collect-cpu-temp.pm.log
00:01:27.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720974795_collect-bmc-pm.bmc.pm.log
00:01:28.837 18:33:16 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:28.837 18:33:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:28.837 18:33:16 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:28.837 18:33:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.837 18:33:16 -- spdk/autobuild.sh@16 -- $ date -u
00:01:28.837 Sun Jul 14 04:33:16 PM UTC 2024
00:01:28.837 18:33:16 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:28.837 v24.09-pre-202-g719d03c6a
00:01:28.837 18:33:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:28.837 18:33:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:28.837 18:33:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:28.837 18:33:16 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:28.837 18:33:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:28.837 18:33:16 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.837 ************************************
00:01:28.837 START TEST ubsan
00:01:28.837 ************************************
00:01:28.837 18:33:17 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:28.837 using ubsan
00:01:28.837
00:01:28.837 real 0m0.000s
00:01:28.837 user 0m0.000s
00:01:28.837 sys 0m0.000s
00:01:28.837 18:33:17 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:28.837 18:33:17 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:28.837 ************************************
00:01:28.837 END TEST ubsan
00:01:28.837 ************************************
00:01:28.837 18:33:17 -- common/autotest_common.sh@1142 -- $ return 0
00:01:28.837 18:33:17 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:28.837 18:33:17 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:28.837 18:33:17 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:28.837 18:33:17 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:01:28.837 18:33:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:28.837 18:33:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.837 ************************************
00:01:28.837 START TEST build_native_dpdk
00:01:28.837 ************************************
00:01:28.837 18:33:17 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.837 18:33:17 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:29.096 eeb0605f11 version: 23.11.0
00:01:29.096 238778122a doc: update release notes for 23.11
00:01:29.096 46aa6b3cfc doc: fix description of RSS features
00:01:29.096 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:29.096 7e421ae345 devtools: support skipping forbid rule check
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:29.096 18:33:17 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:29.097 18:33:17 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:29.097 18:33:17 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:29.097 patching file config/rte_config.h
00:01:29.097 Hunk #1 succeeded at 60 (offset 1 line).
00:01:29.097 18:33:17 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:01:29.097 18:33:17 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
00:01:29.097 18:33:17 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:01:29.097 18:33:17 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:29.097 18:33:17 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:33.289 The Meson build system
00:01:33.289 Version: 1.3.1
00:01:33.289 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:33.289 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:33.289 Build type: native build
00:01:33.289 Program cat found: YES (/usr/bin/cat)
00:01:33.289 Project name: DPDK
00:01:33.289 Project version: 23.11.0
00:01:33.289 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:33.289 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:33.289 Host machine cpu family: x86_64
00:01:33.289 Host machine cpu: x86_64
00:01:33.289 Message: ## Building in Developer Mode ##
00:01:33.289 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:33.289 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:33.289 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:33.289 Program python3 found: YES (/usr/bin/python3)
00:01:33.289 Program cat found: YES (/usr/bin/cat)
00:01:33.289 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:33.289 Compiler for C supports arguments -march=native: YES
00:01:33.289 Checking for size of "void *" : 8
00:01:33.289 Checking for size of "void *" : 8 (cached)
00:01:33.289 Library m found: YES
00:01:33.289 Library numa found: YES
00:01:33.289 Has header "numaif.h" : YES
00:01:33.289 Library fdt found: NO
00:01:33.289 Library execinfo found: NO
00:01:33.290 Has header "execinfo.h" : YES
00:01:33.290 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:33.290 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:33.290 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:33.290 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:33.290 Run-time dependency openssl found: YES 3.0.9
00:01:33.290 Run-time dependency libpcap found: YES 1.10.4
00:01:33.290 Has header "pcap.h" with dependency libpcap: YES
00:01:33.290 Compiler for C supports arguments -Wcast-qual: YES
00:01:33.290 Compiler for C supports arguments -Wdeprecated: YES
00:01:33.290 Compiler for C supports arguments -Wformat: YES
00:01:33.290 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:33.290 Compiler for C supports arguments -Wformat-security: NO
00:01:33.290 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:33.290 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:33.290 Compiler for C supports arguments -Wnested-externs: YES
00:01:33.290 Compiler for C supports arguments -Wold-style-definition: YES
00:01:33.290 Compiler for C supports arguments -Wpointer-arith: YES
00:01:33.290 Compiler for C supports arguments -Wsign-compare: YES
00:01:33.290 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:33.290 Compiler for C supports arguments -Wundef: YES
00:01:33.290 Compiler for C supports arguments -Wwrite-strings: YES
00:01:33.290 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:33.290 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:33.290 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.290 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:33.290 Program objdump found: YES (/usr/bin/objdump) 00:01:33.290 Compiler for C supports arguments -mavx512f: YES 00:01:33.290 Checking if "AVX512 checking" compiles: YES 00:01:33.290 Fetching value of define "__SSE4_2__" : 1 00:01:33.290 Fetching value of define "__AES__" : 1 00:01:33.290 Fetching value of define "__AVX__" : 1 00:01:33.290 Fetching value of define "__AVX2__" : (undefined) 00:01:33.290 Fetching value of define "__AVX512BW__" : (undefined) 00:01:33.290 Fetching value of define "__AVX512CD__" : (undefined) 00:01:33.290 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:33.290 Fetching value of define "__AVX512F__" : (undefined) 00:01:33.290 Fetching value of define "__AVX512VL__" : (undefined) 00:01:33.290 Fetching value of define "__PCLMUL__" : 1 00:01:33.290 Fetching value of define "__RDRND__" : 1 00:01:33.290 Fetching value of define "__RDSEED__" : (undefined) 00:01:33.290 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:33.290 Fetching value of define "__znver1__" : (undefined) 00:01:33.290 Fetching value of define "__znver2__" : (undefined) 00:01:33.290 Fetching value of define "__znver3__" : (undefined) 00:01:33.290 Fetching value of define "__znver4__" : (undefined) 00:01:33.290 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:33.290 Message: lib/log: Defining dependency "log" 00:01:33.290 Message: lib/kvargs: Defining dependency "kvargs" 00:01:33.290 Message: lib/telemetry: Defining dependency "telemetry" 00:01:33.290 Checking for function "getentropy" : NO 00:01:33.290 Message: lib/eal: Defining dependency "eal" 00:01:33.290 Message: lib/ring: Defining dependency "ring" 00:01:33.290 Message: lib/rcu: Defining dependency "rcu" 00:01:33.290 Message: lib/mempool: 
Defining dependency "mempool" 00:01:33.290 Message: lib/mbuf: Defining dependency "mbuf" 00:01:33.290 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:33.290 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.290 Compiler for C supports arguments -mpclmul: YES 00:01:33.290 Compiler for C supports arguments -maes: YES 00:01:33.290 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.290 Compiler for C supports arguments -mavx512bw: YES 00:01:33.290 Compiler for C supports arguments -mavx512dq: YES 00:01:33.290 Compiler for C supports arguments -mavx512vl: YES 00:01:33.290 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:33.290 Compiler for C supports arguments -mavx2: YES 00:01:33.290 Compiler for C supports arguments -mavx: YES 00:01:33.290 Message: lib/net: Defining dependency "net" 00:01:33.290 Message: lib/meter: Defining dependency "meter" 00:01:33.290 Message: lib/ethdev: Defining dependency "ethdev" 00:01:33.290 Message: lib/pci: Defining dependency "pci" 00:01:33.290 Message: lib/cmdline: Defining dependency "cmdline" 00:01:33.290 Message: lib/metrics: Defining dependency "metrics" 00:01:33.290 Message: lib/hash: Defining dependency "hash" 00:01:33.290 Message: lib/timer: Defining dependency "timer" 00:01:33.290 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.290 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:33.290 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:33.290 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:33.290 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:33.290 Message: lib/acl: Defining dependency "acl" 00:01:33.290 Message: lib/bbdev: Defining dependency "bbdev" 00:01:33.290 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:33.290 Run-time dependency libelf found: YES 0.190 00:01:33.290 Message: lib/bpf: Defining dependency "bpf" 00:01:33.290 
Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:33.290 Message: lib/compressdev: Defining dependency "compressdev" 00:01:33.290 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:33.290 Message: lib/distributor: Defining dependency "distributor" 00:01:33.290 Message: lib/dmadev: Defining dependency "dmadev" 00:01:33.290 Message: lib/efd: Defining dependency "efd" 00:01:33.290 Message: lib/eventdev: Defining dependency "eventdev" 00:01:33.290 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:33.290 Message: lib/gpudev: Defining dependency "gpudev" 00:01:33.290 Message: lib/gro: Defining dependency "gro" 00:01:33.290 Message: lib/gso: Defining dependency "gso" 00:01:33.290 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:33.290 Message: lib/jobstats: Defining dependency "jobstats" 00:01:33.290 Message: lib/latencystats: Defining dependency "latencystats" 00:01:33.290 Message: lib/lpm: Defining dependency "lpm" 00:01:33.290 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.290 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:33.290 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:33.290 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:33.290 Message: lib/member: Defining dependency "member" 00:01:33.290 Message: lib/pcapng: Defining dependency "pcapng" 00:01:33.290 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:33.290 Message: lib/power: Defining dependency "power" 00:01:33.290 Message: lib/rawdev: Defining dependency "rawdev" 00:01:33.290 Message: lib/regexdev: Defining dependency "regexdev" 00:01:33.290 Message: lib/mldev: Defining dependency "mldev" 00:01:33.290 Message: lib/rib: Defining dependency "rib" 00:01:33.290 Message: lib/reorder: Defining dependency "reorder" 00:01:33.290 Message: lib/sched: Defining dependency "sched" 00:01:33.290 Message: lib/security: Defining dependency "security" 00:01:33.290 Message: lib/stack: 
Defining dependency "stack" 00:01:33.290 Has header "linux/userfaultfd.h" : YES 00:01:33.290 Has header "linux/vduse.h" : YES 00:01:33.290 Message: lib/vhost: Defining dependency "vhost" 00:01:33.290 Message: lib/ipsec: Defining dependency "ipsec" 00:01:33.290 Message: lib/pdcp: Defining dependency "pdcp" 00:01:33.290 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.290 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:33.290 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:33.290 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:33.290 Message: lib/fib: Defining dependency "fib" 00:01:33.290 Message: lib/port: Defining dependency "port" 00:01:33.290 Message: lib/pdump: Defining dependency "pdump" 00:01:33.290 Message: lib/table: Defining dependency "table" 00:01:33.290 Message: lib/pipeline: Defining dependency "pipeline" 00:01:33.290 Message: lib/graph: Defining dependency "graph" 00:01:33.290 Message: lib/node: Defining dependency "node" 00:01:34.674 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:34.674 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:34.674 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:34.674 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:34.674 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:34.674 Compiler for C supports arguments -Wno-unused-value: YES 00:01:34.674 Compiler for C supports arguments -Wno-format: YES 00:01:34.674 Compiler for C supports arguments -Wno-format-security: YES 00:01:34.674 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:34.674 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:34.674 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:34.674 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:34.674 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:34.674 
Compiler for C supports arguments -mavx512f: YES (cached) 00:01:34.674 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:34.674 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:34.674 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:34.674 Has header "sys/epoll.h" : YES 00:01:34.674 Program doxygen found: YES (/usr/bin/doxygen) 00:01:34.674 Configuring doxy-api-html.conf using configuration 00:01:34.674 Configuring doxy-api-man.conf using configuration 00:01:34.674 Program mandb found: YES (/usr/bin/mandb) 00:01:34.674 Program sphinx-build found: NO 00:01:34.674 Configuring rte_build_config.h using configuration 00:01:34.674 Message: 00:01:34.674 ================= 00:01:34.674 Applications Enabled 00:01:34.674 ================= 00:01:34.674 00:01:34.674 apps: 00:01:34.674 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:34.674 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:34.674 test-pmd, test-regex, test-sad, test-security-perf, 00:01:34.674 00:01:34.674 Message: 00:01:34.674 ================= 00:01:34.674 Libraries Enabled 00:01:34.674 ================= 00:01:34.674 00:01:34.674 libs: 00:01:34.674 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:34.674 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:34.674 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:34.674 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:34.674 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:34.674 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:34.674 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:34.674 00:01:34.674 00:01:34.674 Message: 00:01:34.674 =============== 00:01:34.674 Drivers Enabled 00:01:34.674 =============== 00:01:34.674 00:01:34.674 common: 00:01:34.674 00:01:34.674 bus: 
00:01:34.674 pci, vdev, 00:01:34.674 mempool: 00:01:34.674 ring, 00:01:34.674 dma: 00:01:34.674 00:01:34.674 net: 00:01:34.674 i40e, 00:01:34.674 raw: 00:01:34.674 00:01:34.674 crypto: 00:01:34.674 00:01:34.674 compress: 00:01:34.674 00:01:34.674 regex: 00:01:34.674 00:01:34.674 ml: 00:01:34.674 00:01:34.674 vdpa: 00:01:34.674 00:01:34.674 event: 00:01:34.674 00:01:34.674 baseband: 00:01:34.674 00:01:34.674 gpu: 00:01:34.674 00:01:34.674 00:01:34.674 Message: 00:01:34.674 ================= 00:01:34.674 Content Skipped 00:01:34.674 ================= 00:01:34.674 00:01:34.674 apps: 00:01:34.674 00:01:34.674 libs: 00:01:34.674 00:01:34.674 drivers: 00:01:34.674 common/cpt: not in enabled drivers build config 00:01:34.674 common/dpaax: not in enabled drivers build config 00:01:34.674 common/iavf: not in enabled drivers build config 00:01:34.674 common/idpf: not in enabled drivers build config 00:01:34.674 common/mvep: not in enabled drivers build config 00:01:34.674 common/octeontx: not in enabled drivers build config 00:01:34.674 bus/auxiliary: not in enabled drivers build config 00:01:34.674 bus/cdx: not in enabled drivers build config 00:01:34.674 bus/dpaa: not in enabled drivers build config 00:01:34.674 bus/fslmc: not in enabled drivers build config 00:01:34.674 bus/ifpga: not in enabled drivers build config 00:01:34.674 bus/platform: not in enabled drivers build config 00:01:34.674 bus/vmbus: not in enabled drivers build config 00:01:34.674 common/cnxk: not in enabled drivers build config 00:01:34.674 common/mlx5: not in enabled drivers build config 00:01:34.674 common/nfp: not in enabled drivers build config 00:01:34.674 common/qat: not in enabled drivers build config 00:01:34.674 common/sfc_efx: not in enabled drivers build config 00:01:34.674 mempool/bucket: not in enabled drivers build config 00:01:34.674 mempool/cnxk: not in enabled drivers build config 00:01:34.674 mempool/dpaa: not in enabled drivers build config 00:01:34.674 mempool/dpaa2: not in enabled 
drivers build config 00:01:34.674 mempool/octeontx: not in enabled drivers build config 00:01:34.674 mempool/stack: not in enabled drivers build config 00:01:34.674 dma/cnxk: not in enabled drivers build config 00:01:34.674 dma/dpaa: not in enabled drivers build config 00:01:34.674 dma/dpaa2: not in enabled drivers build config 00:01:34.674 dma/hisilicon: not in enabled drivers build config 00:01:34.674 dma/idxd: not in enabled drivers build config 00:01:34.675 dma/ioat: not in enabled drivers build config 00:01:34.675 dma/skeleton: not in enabled drivers build config 00:01:34.675 net/af_packet: not in enabled drivers build config 00:01:34.675 net/af_xdp: not in enabled drivers build config 00:01:34.675 net/ark: not in enabled drivers build config 00:01:34.675 net/atlantic: not in enabled drivers build config 00:01:34.675 net/avp: not in enabled drivers build config 00:01:34.675 net/axgbe: not in enabled drivers build config 00:01:34.675 net/bnx2x: not in enabled drivers build config 00:01:34.675 net/bnxt: not in enabled drivers build config 00:01:34.675 net/bonding: not in enabled drivers build config 00:01:34.675 net/cnxk: not in enabled drivers build config 00:01:34.675 net/cpfl: not in enabled drivers build config 00:01:34.675 net/cxgbe: not in enabled drivers build config 00:01:34.675 net/dpaa: not in enabled drivers build config 00:01:34.675 net/dpaa2: not in enabled drivers build config 00:01:34.675 net/e1000: not in enabled drivers build config 00:01:34.675 net/ena: not in enabled drivers build config 00:01:34.675 net/enetc: not in enabled drivers build config 00:01:34.675 net/enetfec: not in enabled drivers build config 00:01:34.675 net/enic: not in enabled drivers build config 00:01:34.675 net/failsafe: not in enabled drivers build config 00:01:34.675 net/fm10k: not in enabled drivers build config 00:01:34.675 net/gve: not in enabled drivers build config 00:01:34.675 net/hinic: not in enabled drivers build config 00:01:34.675 net/hns3: not in enabled 
drivers build config 00:01:34.675 net/iavf: not in enabled drivers build config 00:01:34.675 net/ice: not in enabled drivers build config 00:01:34.675 net/idpf: not in enabled drivers build config 00:01:34.675 net/igc: not in enabled drivers build config 00:01:34.675 net/ionic: not in enabled drivers build config 00:01:34.675 net/ipn3ke: not in enabled drivers build config 00:01:34.675 net/ixgbe: not in enabled drivers build config 00:01:34.675 net/mana: not in enabled drivers build config 00:01:34.675 net/memif: not in enabled drivers build config 00:01:34.675 net/mlx4: not in enabled drivers build config 00:01:34.675 net/mlx5: not in enabled drivers build config 00:01:34.675 net/mvneta: not in enabled drivers build config 00:01:34.675 net/mvpp2: not in enabled drivers build config 00:01:34.675 net/netvsc: not in enabled drivers build config 00:01:34.675 net/nfb: not in enabled drivers build config 00:01:34.675 net/nfp: not in enabled drivers build config 00:01:34.675 net/ngbe: not in enabled drivers build config 00:01:34.675 net/null: not in enabled drivers build config 00:01:34.675 net/octeontx: not in enabled drivers build config 00:01:34.675 net/octeon_ep: not in enabled drivers build config 00:01:34.675 net/pcap: not in enabled drivers build config 00:01:34.675 net/pfe: not in enabled drivers build config 00:01:34.675 net/qede: not in enabled drivers build config 00:01:34.675 net/ring: not in enabled drivers build config 00:01:34.675 net/sfc: not in enabled drivers build config 00:01:34.675 net/softnic: not in enabled drivers build config 00:01:34.675 net/tap: not in enabled drivers build config 00:01:34.675 net/thunderx: not in enabled drivers build config 00:01:34.675 net/txgbe: not in enabled drivers build config 00:01:34.675 net/vdev_netvsc: not in enabled drivers build config 00:01:34.675 net/vhost: not in enabled drivers build config 00:01:34.675 net/virtio: not in enabled drivers build config 00:01:34.675 net/vmxnet3: not in enabled drivers build 
config 00:01:34.675 raw/cnxk_bphy: not in enabled drivers build config 00:01:34.675 raw/cnxk_gpio: not in enabled drivers build config 00:01:34.675 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:34.675 raw/ifpga: not in enabled drivers build config 00:01:34.675 raw/ntb: not in enabled drivers build config 00:01:34.675 raw/skeleton: not in enabled drivers build config 00:01:34.675 crypto/armv8: not in enabled drivers build config 00:01:34.675 crypto/bcmfs: not in enabled drivers build config 00:01:34.675 crypto/caam_jr: not in enabled drivers build config 00:01:34.675 crypto/ccp: not in enabled drivers build config 00:01:34.675 crypto/cnxk: not in enabled drivers build config 00:01:34.675 crypto/dpaa_sec: not in enabled drivers build config 00:01:34.675 crypto/dpaa2_sec: not in enabled drivers build config 00:01:34.675 crypto/ipsec_mb: not in enabled drivers build config 00:01:34.675 crypto/mlx5: not in enabled drivers build config 00:01:34.675 crypto/mvsam: not in enabled drivers build config 00:01:34.675 crypto/nitrox: not in enabled drivers build config 00:01:34.675 crypto/null: not in enabled drivers build config 00:01:34.675 crypto/octeontx: not in enabled drivers build config 00:01:34.675 crypto/openssl: not in enabled drivers build config 00:01:34.675 crypto/scheduler: not in enabled drivers build config 00:01:34.675 crypto/uadk: not in enabled drivers build config 00:01:34.675 crypto/virtio: not in enabled drivers build config 00:01:34.675 compress/isal: not in enabled drivers build config 00:01:34.675 compress/mlx5: not in enabled drivers build config 00:01:34.675 compress/octeontx: not in enabled drivers build config 00:01:34.675 compress/zlib: not in enabled drivers build config 00:01:34.675 regex/mlx5: not in enabled drivers build config 00:01:34.675 regex/cn9k: not in enabled drivers build config 00:01:34.675 ml/cnxk: not in enabled drivers build config 00:01:34.675 vdpa/ifc: not in enabled drivers build config 00:01:34.675 vdpa/mlx5: not in 
enabled drivers build config 00:01:34.675 vdpa/nfp: not in enabled drivers build config 00:01:34.675 vdpa/sfc: not in enabled drivers build config 00:01:34.675 event/cnxk: not in enabled drivers build config 00:01:34.675 event/dlb2: not in enabled drivers build config 00:01:34.675 event/dpaa: not in enabled drivers build config 00:01:34.675 event/dpaa2: not in enabled drivers build config 00:01:34.675 event/dsw: not in enabled drivers build config 00:01:34.675 event/opdl: not in enabled drivers build config 00:01:34.675 event/skeleton: not in enabled drivers build config 00:01:34.675 event/sw: not in enabled drivers build config 00:01:34.675 event/octeontx: not in enabled drivers build config 00:01:34.675 baseband/acc: not in enabled drivers build config 00:01:34.675 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:34.675 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:34.675 baseband/la12xx: not in enabled drivers build config 00:01:34.675 baseband/null: not in enabled drivers build config 00:01:34.675 baseband/turbo_sw: not in enabled drivers build config 00:01:34.675 gpu/cuda: not in enabled drivers build config 00:01:34.675 00:01:34.675 00:01:34.675 Build targets in project: 220 00:01:34.675 00:01:34.675 DPDK 23.11.0 00:01:34.675 00:01:34.675 User defined options 00:01:34.675 libdir : lib 00:01:34.675 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:34.675 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:34.675 c_link_args : 00:01:34.675 enable_docs : false 00:01:34.675 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:34.675 enable_kmods : false 00:01:34.675 machine : native 00:01:34.675 tests : false 00:01:34.675 00:01:34.675 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:34.675 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:34.675 18:33:22 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:34.675 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:34.675 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:34.675 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:34.675 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:34.675 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:34.675 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:34.675 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:34.936 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:34.936 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:34.936 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:34.936 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:34.936 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:34.936 [12/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:34.936 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:34.936 [14/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:34.936 [15/710] Linking static target lib/librte_kvargs.a 00:01:34.936 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:34.936 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:34.936 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:34.936 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:34.936 [20/710] Linking static target lib/librte_log.a 00:01:35.197 [21/710] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.197 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.776 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.776 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:35.776 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:35.776 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:35.776 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:35.776 [28/710] Linking target lib/librte_log.so.24.0 00:01:35.776 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:35.776 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:35.776 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:35.776 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:35.776 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.776 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.776 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:35.776 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:35.776 [37/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.776 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:36.039 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:36.039 [40/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:36.039 [41/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:36.039 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:36.039 [43/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:36.039 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:36.039 [45/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:36.039 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:36.039 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:36.039 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:36.039 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:36.039 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:36.039 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:36.039 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:36.039 [53/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:36.039 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:36.039 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:36.039 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:36.039 [57/710] Linking target lib/librte_kvargs.so.24.0 00:01:36.039 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:36.039 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:36.039 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:36.300 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:36.300 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:36.300 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:36.300 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:36.300 [65/710] Generating symbol file 
lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:36.300 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:36.563 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:36.563 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:36.563 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:36.563 [70/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:36.563 [71/710] Linking static target lib/librte_pci.a 00:01:36.563 [72/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:36.826 [73/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:36.826 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:36.826 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:36.826 [76/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.826 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:36.826 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:36.826 [79/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:36.826 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.092 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.092 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:37.092 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:37.092 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:37.092 [85/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:37.092 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:37.092 [87/710] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:37.092 [88/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:37.092 [89/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:37.093 [90/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:37.093 [91/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:37.093 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:37.093 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:37.093 [94/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:37.093 [95/710] Linking static target lib/librte_ring.a 00:01:37.093 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:37.093 [97/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:37.093 [98/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:37.362 [99/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:37.362 [100/710] Linking static target lib/librte_meter.a 00:01:37.362 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:37.362 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:37.362 [103/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.362 [104/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:37.362 [105/710] Linking static target lib/librte_telemetry.a 00:01:37.362 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:37.362 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:37.362 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:37.362 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:37.362 [110/710] Compiling C object 
lib/librte_net.a.p/net_rte_ether.c.o 00:01:37.362 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:37.670 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:37.670 [113/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:37.670 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:37.670 [115/710] Linking static target lib/librte_eal.a 00:01:37.670 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.670 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.670 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:37.670 [119/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:37.670 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:37.670 [121/710] Linking static target lib/librte_net.a 00:01:37.670 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:37.958 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:37.958 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.958 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:37.958 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:37.958 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:37.958 [128/710] Linking static target lib/librte_mempool.a 00:01:37.958 [129/710] Linking static target lib/librte_cmdline.a 00:01:37.958 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.958 [131/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.217 [132/710] Linking target lib/librte_telemetry.so.24.0 00:01:38.217 [133/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:38.217 [134/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:38.217 [135/710] Linking static target lib/librte_cfgfile.a 00:01:38.217 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:38.217 [137/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.217 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:38.217 [139/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:38.217 [140/710] Linking static target lib/librte_metrics.a 00:01:38.217 [141/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:38.217 [142/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:38.477 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:38.477 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:38.477 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:38.477 [146/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:38.477 [147/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:38.477 [148/710] Linking static target lib/librte_bitratestats.a 00:01:38.477 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:38.477 [150/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.477 [151/710] Linking static target lib/librte_rcu.a 00:01:38.742 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:38.742 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.742 [154/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:38.742 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:38.742 [156/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:39.004 [157/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.004 [158/710] Linking static target lib/librte_timer.a 00:01:39.004 [159/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.004 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:39.004 [161/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.004 [162/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.004 [163/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:39.004 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:39.004 [165/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.004 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:39.268 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:39.268 [168/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:39.268 [169/710] Linking static target lib/librte_bbdev.a 00:01:39.268 [170/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.268 [171/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:39.268 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:39.268 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.526 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.526 [175/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.526 [176/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.526 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:39.526 [178/710] 
Linking static target lib/librte_compressdev.a 00:01:39.526 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:39.526 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:39.789 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:39.789 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:39.789 [183/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.789 [184/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:40.046 [185/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:40.046 [186/710] Linking static target lib/librte_distributor.a 00:01:40.046 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.308 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:40.308 [189/710] Linking static target lib/librte_bpf.a 00:01:40.308 [190/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.308 [191/710] Linking static target lib/librte_dmadev.a 00:01:40.308 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:40.308 [193/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.308 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:40.308 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:40.308 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:40.308 [197/710] Linking static target lib/librte_dispatcher.a 00:01:40.567 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:40.567 [199/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.567 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 
00:01:40.567 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:40.567 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:40.567 [203/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:40.567 [204/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:40.567 [205/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:40.567 [206/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:40.567 [207/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.567 [208/710] Linking static target lib/librte_gpudev.a 00:01:40.567 [209/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:40.567 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:40.567 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:40.567 [212/710] Linking static target lib/librte_gro.a 00:01:40.567 [213/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.567 [214/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.830 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:40.830 [216/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:40.830 [217/710] Linking static target lib/librte_jobstats.a 00:01:40.830 [218/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.830 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:41.097 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:41.097 [221/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.097 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.097 [223/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:41.357 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:41.357 [225/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:41.357 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:41.357 [227/710] Linking static target lib/librte_latencystats.a 00:01:41.357 [228/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.357 [229/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:41.357 [230/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:41.357 [231/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:41.357 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:41.357 [233/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:41.618 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:41.618 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:41.618 [236/710] Linking static target lib/librte_ip_frag.a 00:01:41.618 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:41.618 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.884 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:41.884 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:41.884 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:41.884 [242/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:41.884 [243/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:41.884 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:42.143 
[245/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.143 [246/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.143 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:42.143 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:42.143 [249/710] Linking static target lib/librte_gso.a 00:01:42.143 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:42.143 [251/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:42.143 [252/710] Linking static target lib/librte_regexdev.a 00:01:42.410 [253/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:42.410 [254/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:42.410 [255/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:42.410 [256/710] Linking static target lib/librte_rawdev.a 00:01:42.410 [257/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:42.410 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:42.410 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.410 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:42.410 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:42.669 [262/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:42.669 [263/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:42.669 [264/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:42.669 [265/710] Linking static target lib/librte_efd.a 00:01:42.669 [266/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:42.669 [267/710] Linking static target lib/librte_mldev.a 00:01:42.669 [268/710] Linking static target 
lib/librte_pcapng.a 00:01:42.669 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:42.669 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:42.669 [271/710] Linking static target lib/librte_stack.a 00:01:42.669 [272/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:42.669 [273/710] Linking static target lib/acl/libavx2_tmp.a 00:01:42.932 [274/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:42.932 [275/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:42.932 [276/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:42.932 [277/710] Linking static target lib/librte_lpm.a 00:01:42.932 [278/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.932 [279/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:42.932 [280/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.932 [281/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:42.932 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:42.932 [283/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:42.932 [284/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.932 [285/710] Linking static target lib/librte_hash.a 00:01:43.192 [286/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.192 [287/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:43.454 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:43.454 [289/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:43.454 [290/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:43.454 [291/710] Compiling C object 
lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:43.454 [292/710] Linking static target lib/acl/libavx512_tmp.a 00:01:43.454 [293/710] Linking static target lib/librte_reorder.a 00:01:43.454 [294/710] Linking static target lib/librte_acl.a 00:01:43.454 [295/710] Linking static target lib/librte_power.a 00:01:43.454 [296/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:43.454 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:43.454 [298/710] Linking static target lib/librte_security.a 00:01:43.454 [299/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.454 [300/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.717 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:43.717 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:43.717 [303/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:43.717 [304/710] Linking static target lib/librte_mbuf.a 00:01:43.717 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:43.717 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:43.717 [307/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.717 [308/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:43.717 [309/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:43.717 [310/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:43.982 [311/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:43.982 [312/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.982 [313/710] Linking static target lib/librte_rib.a 00:01:43.982 [314/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.982 [315/710] 
Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:43.982 [316/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:43.982 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:43.982 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.246 [319/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:44.246 [320/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:44.246 [321/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:44.246 [322/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:44.246 [323/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:44.246 [324/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:44.246 [325/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:44.246 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.511 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:44.511 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.511 [329/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.511 [330/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.511 [331/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:44.773 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:44.773 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:44.773 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:45.040 [335/710] Linking static target lib/librte_eventdev.a 00:01:45.040 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:45.040 [337/710] Linking 
static target lib/librte_member.a 00:01:45.040 [338/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.040 [339/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:45.299 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:45.299 [341/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:45.299 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:45.299 [343/710] Linking static target lib/librte_cryptodev.a 00:01:45.299 [344/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:45.299 [345/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:45.299 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:45.299 [347/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:45.299 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:45.558 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:45.558 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:45.558 [351/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:45.558 [352/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:45.558 [353/710] Linking static target lib/librte_ethdev.a 00:01:45.558 [354/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.558 [355/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:45.558 [356/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:45.558 [357/710] Linking static target lib/librte_sched.a 00:01:45.558 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:45.558 [359/710] Linking static target lib/librte_fib.a 00:01:45.558 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:45.820 [361/710] 
Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:45.820 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:45.820 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:45.820 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:45.820 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:46.081 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:46.081 [367/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:46.081 [368/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:46.081 [369/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.081 [370/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.081 [371/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.347 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:46.347 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:46.347 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:46.347 [375/710] Linking static target lib/librte_pdump.a 00:01:46.613 [376/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:46.613 [377/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:46.613 [378/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:46.613 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:46.613 [380/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.613 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:46.613 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:46.613 [383/710] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:46.613 [384/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:46.613 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:46.875 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.875 [387/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:46.875 [388/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.875 [389/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:46.875 [390/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:47.143 [391/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:47.143 [392/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:47.143 [393/710] Linking static target lib/librte_ipsec.a 00:01:47.143 [394/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:47.143 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:47.143 [396/710] Linking static target lib/librte_table.a 00:01:47.143 [397/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:47.404 [398/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.404 [399/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:47.404 [400/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:47.673 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.673 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:47.673 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:47.937 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:47.937 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 
00:01:47.937 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:47.937 [407/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:48.202 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.202 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.202 [410/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.202 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.202 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.202 [413/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:48.202 [414/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:48.464 [415/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.464 [416/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.464 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:48.464 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.464 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.464 [420/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.464 [421/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.464 [422/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.464 [423/710] Linking static target drivers/librte_bus_vdev.a 00:01:48.729 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:48.729 [425/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:48.729 [426/710] Linking static target lib/librte_port.a 00:01:48.729 [427/710] Linking target lib/librte_eal.so.24.0 00:01:48.729 
[428/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.729 [429/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:48.991 [430/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.991 [431/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:48.991 [432/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.991 [433/710] Linking target lib/librte_ring.so.24.0 00:01:48.991 [434/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:48.991 [435/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.991 [436/710] Linking target lib/librte_meter.so.24.0 00:01:48.991 [437/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:48.991 [438/710] Linking target lib/librte_pci.so.24.0 00:01:48.991 [439/710] Linking target lib/librte_timer.so.24.0 00:01:48.991 [440/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:49.254 [441/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:49.254 [442/710] Linking target lib/librte_cfgfile.so.24.0 00:01:49.254 [443/710] Linking target lib/librte_acl.so.24.0 00:01:49.254 [444/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:49.254 [445/710] Linking target lib/librte_dmadev.so.24.0 00:01:49.254 [446/710] Linking target lib/librte_jobstats.so.24.0 00:01:49.254 [447/710] Linking target lib/librte_rcu.so.24.0 00:01:49.254 [448/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:49.254 [449/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:49.254 [450/710] Linking target lib/librte_mempool.so.24.0 00:01:49.254 [451/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:49.254 [452/710] 
Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:49.254 [453/710] Linking static target lib/librte_graph.a 00:01:49.254 [454/710] Linking static target drivers/librte_bus_pci.a 00:01:49.254 [455/710] Linking target lib/librte_rawdev.so.24.0 00:01:49.516 [456/710] Linking target lib/librte_stack.so.24.0 00:01:49.516 [457/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.516 [458/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:49.516 [459/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.516 [460/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.516 [461/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:49.516 [462/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:49.516 [463/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:49.516 [464/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:49.516 [465/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.516 [466/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:49.516 [467/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:49.516 [468/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:49.516 [469/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:49.516 [470/710] Linking target lib/librte_mbuf.so.24.0 00:01:49.776 [471/710] Linking target lib/librte_rib.so.24.0 00:01:49.777 [472/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:49.777 [473/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.777 [474/710] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.777 [475/710] Linking static target drivers/librte_mempool_ring.a 00:01:49.777 [476/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:49.777 [477/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.777 [478/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:49.777 [479/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:50.042 [480/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:50.042 [481/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:50.042 [482/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:50.042 [483/710] Linking target lib/librte_bbdev.so.24.0 00:01:50.042 [484/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:50.042 [485/710] Linking target lib/librte_net.so.24.0 00:01:50.042 [486/710] Linking target lib/librte_compressdev.so.24.0 00:01:50.042 [487/710] Linking target lib/librte_gpudev.so.24.0 00:01:50.042 [488/710] Linking target lib/librte_distributor.so.24.0 00:01:50.042 [489/710] Linking target lib/librte_cryptodev.so.24.0 00:01:50.042 [490/710] Linking target lib/librte_regexdev.so.24.0 00:01:50.042 [491/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:50.042 [492/710] Linking target lib/librte_mldev.so.24.0 00:01:50.042 [493/710] Linking target lib/librte_reorder.so.24.0 00:01:50.042 [494/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:50.042 [495/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:50.042 [496/710] Linking target lib/librte_fib.so.24.0 00:01:50.042 [497/710] Linking target lib/librte_sched.so.24.0 00:01:50.042 [498/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:50.042 [499/710] 
Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:50.042 [500/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.307 [501/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:50.307 [502/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:50.307 [503/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:50.307 [504/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:50.307 [505/710] Linking target lib/librte_hash.so.24.0 00:01:50.307 [506/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:50.307 [507/710] Linking target lib/librte_cmdline.so.24.0 00:01:50.307 [508/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.307 [509/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:50.307 [510/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:50.307 [511/710] Linking target lib/librte_security.so.24.0 00:01:50.572 [512/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:50.572 [513/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:50.572 [514/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:50.572 [515/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:50.572 [516/710] Linking target lib/librte_efd.so.24.0 00:01:50.572 [517/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:50.572 [518/710] Linking target lib/librte_lpm.so.24.0 00:01:50.572 [519/710] Linking target lib/librte_member.so.24.0 00:01:50.834 [520/710] Linking target lib/librte_ipsec.so.24.0 00:01:50.834 [521/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:50.834 [522/710] 
Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:50.834 [523/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:50.834 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:50.834 [525/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:50.834 [526/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:51.096 [527/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:51.096 [528/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:51.096 [529/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:51.096 [530/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:51.096 [531/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:51.357 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:51.357 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:51.357 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:51.621 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:51.621 [536/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:51.621 [537/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:51.621 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:51.621 [539/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:51.621 [540/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:51.621 [541/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:52.196 [542/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:52.196 [543/710] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:52.196 [544/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:52.196 [545/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:52.196 [546/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:52.196 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:52.196 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:52.196 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:52.457 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:52.457 [551/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:52.457 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:52.457 [553/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:52.457 [554/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:52.457 [555/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:52.720 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:52.720 [557/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:52.720 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:52.984 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:52.984 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:53.557 [561/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:53.557 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:53.557 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:53.557 [564/710] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:53.557 [565/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:53.557 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.557 [567/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:53.557 [568/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:53.840 [569/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:53.840 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:53.840 [571/710] Linking target lib/librte_ethdev.so.24.0 00:01:53.840 [572/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:53.840 [573/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:53.840 [574/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:53.840 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:54.100 [576/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:54.100 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:54.100 [578/710] Linking target lib/librte_metrics.so.24.0 00:01:54.100 [579/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:54.100 [580/710] Linking target lib/librte_bpf.so.24.0 00:01:54.100 [581/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:54.100 [582/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:54.100 [583/710] Linking target lib/librte_eventdev.so.24.0 00:01:54.362 [584/710] Linking target lib/librte_gro.so.24.0 00:01:54.362 [585/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:54.362 [586/710] Generating symbol file 
lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:54.362 [587/710] Linking target lib/librte_gso.so.24.0 00:01:54.362 [588/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:54.362 [589/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:54.362 [590/710] Linking target lib/librte_pcapng.so.24.0 00:01:54.362 [591/710] Linking static target lib/librte_pdcp.a 00:01:54.362 [592/710] Linking target lib/librte_ip_frag.so.24.0 00:01:54.362 [593/710] Linking target lib/librte_latencystats.so.24.0 00:01:54.362 [594/710] Linking target lib/librte_bitratestats.so.24.0 00:01:54.362 [595/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:54.362 [596/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:54.362 [597/710] Linking target lib/librte_power.so.24.0 00:01:54.362 [598/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:54.362 [599/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:54.625 [600/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:54.625 [601/710] Linking target lib/librte_dispatcher.so.24.0 00:01:54.625 [602/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:54.625 [603/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:54.625 [604/710] Linking target lib/librte_pdump.so.24.0 00:01:54.625 [605/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:54.625 [606/710] Linking target lib/librte_port.so.24.0 00:01:54.625 [607/710] Linking target lib/librte_graph.so.24.0 00:01:54.885 [608/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:54.885 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:54.885 [610/710] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:54.885 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:54.885 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:54.885 [613/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.885 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:54.885 [615/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:54.885 [616/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:54.885 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:54.885 [618/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:54.885 [619/710] Linking target lib/librte_pdcp.so.24.0 00:01:55.147 [620/710] Linking target lib/librte_table.so.24.0 00:01:55.147 [621/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:55.408 [622/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:55.408 [623/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:55.408 [624/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:55.408 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:55.408 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:55.408 [627/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:55.689 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:55.689 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:55.689 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:55.948 [631/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:56.207 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:56.207 [633/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:56.207 [634/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:56.207 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:56.207 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:56.207 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:56.207 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:56.207 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:56.207 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:56.465 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:56.465 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:56.723 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:56.723 [644/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:56.723 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:56.723 [646/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:56.723 [647/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:56.723 [648/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:56.981 [649/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:56.981 [650/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:57.240 [651/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:57.240 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:57.240 [653/710] Compiling C object 
app/dpdk-test-regex.p/test-regex_main.c.o 00:01:57.240 [654/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:57.240 [655/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:57.498 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:57.498 [657/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:57.498 [658/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:57.757 [659/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:57.757 [660/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:57.757 [661/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:57.757 [662/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.757 [663/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.757 [664/710] Linking static target drivers/librte_net_i40e.a 00:01:58.015 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:58.015 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:58.273 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:58.273 [668/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.273 [669/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:58.531 [670/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:58.788 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:58.788 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:58.788 [673/710] Linking static target lib/librte_node.a 00:01:59.046 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 
00:01:59.046 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.303 [676/710] Linking target lib/librte_node.so.24.0 00:02:00.237 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:00.803 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:00.803 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:02.701 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:02.958 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:09.518 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:36.108 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:36.108 [684/710] Linking static target lib/librte_vhost.a 00:02:37.483 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.483 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:55.560 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:55.560 [688/710] Linking static target lib/librte_pipeline.a 00:02:55.560 [689/710] Linking target app/dpdk-dumpcap 00:02:55.560 [690/710] Linking target app/dpdk-test-regex 00:02:55.560 [691/710] Linking target app/dpdk-test-gpudev 00:02:55.560 [692/710] Linking target app/dpdk-test-sad 00:02:55.560 [693/710] Linking target app/dpdk-pdump 00:02:55.560 [694/710] Linking target app/dpdk-graph 00:02:55.560 [695/710] Linking target app/dpdk-test-pipeline 00:02:55.560 [696/710] Linking target app/dpdk-proc-info 00:02:55.560 [697/710] Linking target app/dpdk-test-acl 00:02:55.560 [698/710] Linking target app/dpdk-test-mldev 00:02:55.560 [699/710] Linking target app/dpdk-test-cmdline 00:02:55.560 [700/710] Linking target app/dpdk-test-flow-perf 00:02:55.560 [701/710] Linking target app/dpdk-test-dma-perf 00:02:55.560 [702/710] Linking target 
app/dpdk-test-security-perf 00:02:55.560 [703/710] Linking target app/dpdk-test-bbdev 00:02:55.560 [704/710] Linking target app/dpdk-test-crypto-perf 00:02:55.560 [705/710] Linking target app/dpdk-test-fib 00:02:55.560 [706/710] Linking target app/dpdk-test-eventdev 00:02:55.560 [707/710] Linking target app/dpdk-test-compress-perf 00:02:55.560 [708/710] Linking target app/dpdk-testpmd 00:02:57.460 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.719 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:57.719 18:34:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:57.719 18:34:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:57.719 18:34:45 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:57.719 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:57.719 [0/1] Installing files. 
00:02:57.982 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:57.982 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.983 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.983 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:57.983 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.983 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:57.984 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:57.984 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:57.984 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:57.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:57.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:57.986 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.246 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:58.247 Installing lib/librte_mldev.so.24.0 to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.247 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.247 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.247 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_fib.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:58.818 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:58.818 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 
Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:58.818 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.818 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:58.818 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 
Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.818 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.819 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.820 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:58.821 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:58.821 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:58.821 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:58.821 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:58.821 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:58.822 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:58.822 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:58.822 Installing 
symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:58.822 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:58.822 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:58.822 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:58.822 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:58.822 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:58.822 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:58.822 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:58.822 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:58.822 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:58.822 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:58.822 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:58.822 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:58.822 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:58.822 Installing 
symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:58.822 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:58.822 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:58.822 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:58.822 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:58.822 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:58.822 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:58.822 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:58.822 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:58.822 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:58.822 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:58.822 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:58.822 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:58.822 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 
00:02:58.822 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:58.822 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:58.822 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:58.822 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:58.822 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:58.822 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:58.822 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:58.822 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:58.822 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:58.822 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:58.822 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:58.822 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:58.822 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:58.822 Installing symlink 
pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:58.822 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:58.822 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:58.822 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:58.822 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:58.822 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:58.822 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:58.822 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:58.822 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:58.822 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:58.822 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:58.822 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:58.822 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:58.822 Installing symlink pointing to librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:58.822 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:58.822 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:58.822 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:58.822 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:58.822 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:58.822 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:58.822 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:58.822 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:58.822 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:58.822 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:58.822 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:58.822 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:58.822 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 
00:02:58.822 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:58.822 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:58.822 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:58.822 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:58.822 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:58.822 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:58.822 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:58.822 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:58.822 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:58.822 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:58.822 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:58.822 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:58.822 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:58.822 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:58.822 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:58.822 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:58.822 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:58.822 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:58.822 Installing symlink pointing to librte_mldev.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:58.822 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:58.822 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:58.822 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:58.822 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:58.822 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:58.822 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:58.822 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:58.822 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:58.822 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:58.822 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:58.822 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:58.822 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:58.822 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:58.822 Installing symlink 
pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:58.822 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:58.823 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:58.823 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:58.823 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:58.823 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:58.823 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:58.823 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:58.823 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:58.823 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:58.823 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:58.823 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:58.823 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:58.823 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:58.823 
Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:58.823 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:58.823 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:58.823 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:58.823 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:58.823 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:58.823 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:58.823 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:58.823 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:58.823 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:58.823 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:58.823 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:58.823 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:58.823 18:34:46 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:58.823 18:34:46 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:58.823 00:02:58.823 real 1m29.886s 00:02:58.823 user 17m59.008s 00:02:58.823 sys 2m6.262s 00:02:58.823 18:34:46 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:58.823 18:34:46 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:58.823 ************************************ 00:02:58.823 END TEST build_native_dpdk 00:02:58.823 ************************************ 00:02:58.823 18:34:46 -- common/autotest_common.sh@1142 -- $ return 0 00:02:58.823 18:34:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:58.823 18:34:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:58.823 18:34:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:58.823 18:34:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:58.823 18:34:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:58.823 18:34:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:58.823 18:34:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:58.823 18:34:46 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:58.823 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:02:59.081 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.081 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.081 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:59.339 Using 'verbs' RDMA provider 00:03:09.906 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:18.016 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:18.274 Creating mk/config.mk...done. 00:03:18.274 Creating mk/cc.flags.mk...done. 00:03:18.274 Type 'make' to build. 00:03:18.274 18:35:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:18.274 18:35:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:18.274 18:35:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:18.274 18:35:06 -- common/autotest_common.sh@10 -- $ set +x 00:03:18.274 ************************************ 00:03:18.274 START TEST make 00:03:18.274 ************************************ 00:03:18.274 18:35:06 make -- common/autotest_common.sh@1123 -- $ make -j48 00:03:18.533 make[1]: Nothing to be done for 'all'. 
00:03:20.448 The Meson build system 00:03:20.448 Version: 1.3.1 00:03:20.448 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:20.448 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:20.448 Build type: native build 00:03:20.448 Project name: libvfio-user 00:03:20.448 Project version: 0.0.1 00:03:20.448 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:20.448 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:20.448 Host machine cpu family: x86_64 00:03:20.448 Host machine cpu: x86_64 00:03:20.448 Run-time dependency threads found: YES 00:03:20.448 Library dl found: YES 00:03:20.448 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:20.448 Run-time dependency json-c found: YES 0.17 00:03:20.448 Run-time dependency cmocka found: YES 1.1.7 00:03:20.448 Program pytest-3 found: NO 00:03:20.448 Program flake8 found: NO 00:03:20.448 Program misspell-fixer found: NO 00:03:20.448 Program restructuredtext-lint found: NO 00:03:20.448 Program valgrind found: YES (/usr/bin/valgrind) 00:03:20.448 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:20.448 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:20.448 Compiler for C supports arguments -Wwrite-strings: YES 00:03:20.448 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:20.448 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:20.448 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:20.448 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:20.448 Build targets in project: 8 00:03:20.448 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:20.448 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:20.448 00:03:20.448 libvfio-user 0.0.1 00:03:20.448 00:03:20.448 User defined options 00:03:20.448 buildtype : debug 00:03:20.448 default_library: shared 00:03:20.448 libdir : /usr/local/lib 00:03:20.448 00:03:20.448 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:21.041 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:21.041 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:21.041 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:21.041 [3/37] Compiling C object samples/null.p/null.c.o 00:03:21.041 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:21.041 [5/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:21.041 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:21.343 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:21.343 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:21.343 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:21.343 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:21.343 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:21.343 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:21.343 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:21.343 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:21.343 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:21.343 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:21.343 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:21.343 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:21.343 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:21.343 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:21.343 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:21.343 [22/37] Compiling C object samples/server.p/server.c.o 00:03:21.343 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:21.343 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:21.343 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:21.343 [26/37] Compiling C object samples/client.p/client.c.o 00:03:21.343 [27/37] Linking target samples/client 00:03:21.343 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:21.640 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:21.640 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:21.640 [31/37] Linking target test/unit_tests 00:03:21.640 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:21.907 [33/37] Linking target samples/null 00:03:21.907 [34/37] Linking target samples/gpio-pci-idio-16 00:03:21.907 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:21.907 [36/37] Linking target samples/server 00:03:21.907 [37/37] Linking target samples/lspci 00:03:21.907 INFO: autodetecting backend as ninja 00:03:21.907 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:21.907 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:22.484 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:22.484 ninja: no work to do. 
00:03:34.684 CC lib/ut/ut.o 00:03:34.684 CC lib/ut_mock/mock.o 00:03:34.684 CC lib/log/log.o 00:03:34.684 CC lib/log/log_flags.o 00:03:34.684 CC lib/log/log_deprecated.o 00:03:34.684 LIB libspdk_ut.a 00:03:34.684 LIB libspdk_log.a 00:03:34.684 LIB libspdk_ut_mock.a 00:03:34.684 SO libspdk_ut.so.2.0 00:03:34.684 SO libspdk_ut_mock.so.6.0 00:03:34.684 SO libspdk_log.so.7.0 00:03:34.684 SYMLINK libspdk_ut.so 00:03:34.684 SYMLINK libspdk_ut_mock.so 00:03:34.684 SYMLINK libspdk_log.so 00:03:34.684 CC lib/ioat/ioat.o 00:03:34.684 CXX lib/trace_parser/trace.o 00:03:34.684 CC lib/dma/dma.o 00:03:34.684 CC lib/util/base64.o 00:03:34.684 CC lib/util/bit_array.o 00:03:34.684 CC lib/util/cpuset.o 00:03:34.684 CC lib/util/crc16.o 00:03:34.684 CC lib/util/crc32.o 00:03:34.684 CC lib/util/crc32c.o 00:03:34.684 CC lib/util/crc32_ieee.o 00:03:34.684 CC lib/util/crc64.o 00:03:34.684 CC lib/util/dif.o 00:03:34.684 CC lib/util/fd.o 00:03:34.684 CC lib/util/file.o 00:03:34.684 CC lib/util/hexlify.o 00:03:34.684 CC lib/util/iov.o 00:03:34.684 CC lib/util/math.o 00:03:34.684 CC lib/util/pipe.o 00:03:34.684 CC lib/util/strerror_tls.o 00:03:34.685 CC lib/util/string.o 00:03:34.685 CC lib/util/uuid.o 00:03:34.685 CC lib/util/fd_group.o 00:03:34.685 CC lib/util/xor.o 00:03:34.685 CC lib/util/zipf.o 00:03:34.685 CC lib/vfio_user/host/vfio_user_pci.o 00:03:34.685 CC lib/vfio_user/host/vfio_user.o 00:03:34.685 LIB libspdk_dma.a 00:03:34.685 SO libspdk_dma.so.4.0 00:03:34.685 SYMLINK libspdk_dma.so 00:03:34.685 LIB libspdk_ioat.a 00:03:34.685 SO libspdk_ioat.so.7.0 00:03:34.685 SYMLINK libspdk_ioat.so 00:03:34.685 LIB libspdk_vfio_user.a 00:03:34.685 SO libspdk_vfio_user.so.5.0 00:03:34.685 SYMLINK libspdk_vfio_user.so 00:03:34.685 LIB libspdk_util.a 00:03:34.685 SO libspdk_util.so.9.1 00:03:34.685 SYMLINK libspdk_util.so 00:03:34.943 CC lib/json/json_parse.o 00:03:34.943 CC lib/conf/conf.o 00:03:34.943 CC lib/idxd/idxd.o 00:03:34.943 CC lib/json/json_util.o 00:03:34.943 CC 
lib/rdma_provider/common.o 00:03:34.943 CC lib/idxd/idxd_user.o 00:03:34.943 CC lib/json/json_write.o 00:03:34.943 CC lib/idxd/idxd_kernel.o 00:03:34.943 CC lib/env_dpdk/env.o 00:03:34.943 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:34.943 CC lib/env_dpdk/memory.o 00:03:34.943 CC lib/env_dpdk/pci.o 00:03:34.943 CC lib/vmd/vmd.o 00:03:34.943 CC lib/env_dpdk/init.o 00:03:34.943 CC lib/rdma_utils/rdma_utils.o 00:03:34.943 CC lib/vmd/led.o 00:03:34.943 CC lib/env_dpdk/threads.o 00:03:34.943 CC lib/env_dpdk/pci_ioat.o 00:03:34.943 CC lib/env_dpdk/pci_virtio.o 00:03:34.943 CC lib/env_dpdk/pci_vmd.o 00:03:34.943 CC lib/env_dpdk/pci_idxd.o 00:03:34.943 CC lib/env_dpdk/pci_event.o 00:03:34.943 CC lib/env_dpdk/sigbus_handler.o 00:03:34.943 CC lib/env_dpdk/pci_dpdk.o 00:03:34.943 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:34.943 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:34.943 LIB libspdk_trace_parser.a 00:03:34.943 SO libspdk_trace_parser.so.5.0 00:03:35.200 SYMLINK libspdk_trace_parser.so 00:03:35.200 LIB libspdk_conf.a 00:03:35.200 SO libspdk_conf.so.6.0 00:03:35.200 LIB libspdk_rdma_provider.a 00:03:35.200 LIB libspdk_rdma_utils.a 00:03:35.200 LIB libspdk_json.a 00:03:35.200 SO libspdk_rdma_provider.so.6.0 00:03:35.200 SO libspdk_rdma_utils.so.1.0 00:03:35.200 SYMLINK libspdk_conf.so 00:03:35.200 SO libspdk_json.so.6.0 00:03:35.458 SYMLINK libspdk_rdma_provider.so 00:03:35.458 SYMLINK libspdk_rdma_utils.so 00:03:35.458 SYMLINK libspdk_json.so 00:03:35.458 CC lib/jsonrpc/jsonrpc_server.o 00:03:35.458 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:35.458 CC lib/jsonrpc/jsonrpc_client.o 00:03:35.458 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:35.716 LIB libspdk_idxd.a 00:03:35.716 SO libspdk_idxd.so.12.0 00:03:35.716 LIB libspdk_vmd.a 00:03:35.716 SO libspdk_vmd.so.6.0 00:03:35.716 SYMLINK libspdk_idxd.so 00:03:35.716 SYMLINK libspdk_vmd.so 00:03:35.716 LIB libspdk_jsonrpc.a 00:03:35.716 SO libspdk_jsonrpc.so.6.0 00:03:35.974 SYMLINK libspdk_jsonrpc.so 00:03:35.974 CC lib/rpc/rpc.o 
00:03:36.232 LIB libspdk_rpc.a 00:03:36.232 SO libspdk_rpc.so.6.0 00:03:36.490 SYMLINK libspdk_rpc.so 00:03:36.490 CC lib/keyring/keyring.o 00:03:36.490 CC lib/trace/trace.o 00:03:36.490 CC lib/trace/trace_flags.o 00:03:36.490 CC lib/keyring/keyring_rpc.o 00:03:36.490 CC lib/trace/trace_rpc.o 00:03:36.490 CC lib/notify/notify.o 00:03:36.490 CC lib/notify/notify_rpc.o 00:03:36.748 LIB libspdk_notify.a 00:03:36.748 SO libspdk_notify.so.6.0 00:03:36.748 LIB libspdk_keyring.a 00:03:36.748 SYMLINK libspdk_notify.so 00:03:36.748 LIB libspdk_trace.a 00:03:36.748 SO libspdk_keyring.so.1.0 00:03:36.748 SO libspdk_trace.so.10.0 00:03:36.748 SYMLINK libspdk_keyring.so 00:03:36.748 SYMLINK libspdk_trace.so 00:03:37.006 LIB libspdk_env_dpdk.a 00:03:37.007 SO libspdk_env_dpdk.so.14.1 00:03:37.007 CC lib/sock/sock.o 00:03:37.007 CC lib/sock/sock_rpc.o 00:03:37.007 CC lib/thread/thread.o 00:03:37.007 CC lib/thread/iobuf.o 00:03:37.264 SYMLINK libspdk_env_dpdk.so 00:03:37.523 LIB libspdk_sock.a 00:03:37.523 SO libspdk_sock.so.10.0 00:03:37.523 SYMLINK libspdk_sock.so 00:03:37.781 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:37.781 CC lib/nvme/nvme_ctrlr.o 00:03:37.781 CC lib/nvme/nvme_fabric.o 00:03:37.781 CC lib/nvme/nvme_ns_cmd.o 00:03:37.781 CC lib/nvme/nvme_ns.o 00:03:37.781 CC lib/nvme/nvme_pcie_common.o 00:03:37.781 CC lib/nvme/nvme_pcie.o 00:03:37.781 CC lib/nvme/nvme_qpair.o 00:03:37.781 CC lib/nvme/nvme.o 00:03:37.781 CC lib/nvme/nvme_quirks.o 00:03:37.781 CC lib/nvme/nvme_transport.o 00:03:37.781 CC lib/nvme/nvme_discovery.o 00:03:37.781 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:37.781 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:37.781 CC lib/nvme/nvme_tcp.o 00:03:37.781 CC lib/nvme/nvme_opal.o 00:03:37.781 CC lib/nvme/nvme_io_msg.o 00:03:37.781 CC lib/nvme/nvme_poll_group.o 00:03:37.781 CC lib/nvme/nvme_zns.o 00:03:37.781 CC lib/nvme/nvme_stubs.o 00:03:37.781 CC lib/nvme/nvme_auth.o 00:03:37.781 CC lib/nvme/nvme_cuse.o 00:03:37.781 CC lib/nvme/nvme_rdma.o 00:03:37.781 CC 
lib/nvme/nvme_vfio_user.o 00:03:38.716 LIB libspdk_thread.a 00:03:38.716 SO libspdk_thread.so.10.1 00:03:38.716 SYMLINK libspdk_thread.so 00:03:38.973 CC lib/init/json_config.o 00:03:38.973 CC lib/virtio/virtio.o 00:03:38.973 CC lib/vfu_tgt/tgt_endpoint.o 00:03:38.973 CC lib/blob/blobstore.o 00:03:38.973 CC lib/accel/accel.o 00:03:38.973 CC lib/init/subsystem.o 00:03:38.973 CC lib/vfu_tgt/tgt_rpc.o 00:03:38.973 CC lib/virtio/virtio_vhost_user.o 00:03:38.973 CC lib/blob/request.o 00:03:38.973 CC lib/accel/accel_rpc.o 00:03:38.973 CC lib/init/subsystem_rpc.o 00:03:38.973 CC lib/virtio/virtio_vfio_user.o 00:03:38.973 CC lib/blob/zeroes.o 00:03:38.973 CC lib/init/rpc.o 00:03:38.973 CC lib/virtio/virtio_pci.o 00:03:38.973 CC lib/accel/accel_sw.o 00:03:38.973 CC lib/blob/blob_bs_dev.o 00:03:39.231 LIB libspdk_init.a 00:03:39.231 SO libspdk_init.so.5.0 00:03:39.231 LIB libspdk_virtio.a 00:03:39.231 LIB libspdk_vfu_tgt.a 00:03:39.231 SYMLINK libspdk_init.so 00:03:39.231 SO libspdk_vfu_tgt.so.3.0 00:03:39.231 SO libspdk_virtio.so.7.0 00:03:39.231 SYMLINK libspdk_vfu_tgt.so 00:03:39.231 SYMLINK libspdk_virtio.so 00:03:39.489 CC lib/event/app.o 00:03:39.489 CC lib/event/reactor.o 00:03:39.489 CC lib/event/log_rpc.o 00:03:39.489 CC lib/event/app_rpc.o 00:03:39.489 CC lib/event/scheduler_static.o 00:03:39.747 LIB libspdk_event.a 00:03:39.747 SO libspdk_event.so.14.0 00:03:40.005 LIB libspdk_accel.a 00:03:40.005 SYMLINK libspdk_event.so 00:03:40.005 SO libspdk_accel.so.15.1 00:03:40.005 SYMLINK libspdk_accel.so 00:03:40.005 LIB libspdk_nvme.a 00:03:40.263 SO libspdk_nvme.so.13.1 00:03:40.263 CC lib/bdev/bdev.o 00:03:40.263 CC lib/bdev/bdev_rpc.o 00:03:40.263 CC lib/bdev/bdev_zone.o 00:03:40.263 CC lib/bdev/part.o 00:03:40.263 CC lib/bdev/scsi_nvme.o 00:03:40.520 SYMLINK libspdk_nvme.so 00:03:41.893 LIB libspdk_blob.a 00:03:41.893 SO libspdk_blob.so.11.0 00:03:41.893 SYMLINK libspdk_blob.so 00:03:42.150 CC lib/blobfs/blobfs.o 00:03:42.150 CC lib/blobfs/tree.o 00:03:42.150 CC 
lib/lvol/lvol.o 00:03:42.716 LIB libspdk_bdev.a 00:03:42.716 SO libspdk_bdev.so.15.1 00:03:42.716 SYMLINK libspdk_bdev.so 00:03:42.980 LIB libspdk_blobfs.a 00:03:42.980 SO libspdk_blobfs.so.10.0 00:03:42.980 SYMLINK libspdk_blobfs.so 00:03:42.980 CC lib/nbd/nbd.o 00:03:42.980 CC lib/ublk/ublk.o 00:03:42.980 CC lib/scsi/dev.o 00:03:42.980 CC lib/nvmf/ctrlr.o 00:03:42.980 CC lib/scsi/lun.o 00:03:42.980 CC lib/nbd/nbd_rpc.o 00:03:42.980 CC lib/ublk/ublk_rpc.o 00:03:42.980 CC lib/ftl/ftl_core.o 00:03:42.980 CC lib/nvmf/ctrlr_discovery.o 00:03:42.980 CC lib/scsi/port.o 00:03:42.980 CC lib/nvmf/ctrlr_bdev.o 00:03:42.980 CC lib/ftl/ftl_init.o 00:03:42.980 CC lib/scsi/scsi.o 00:03:42.980 CC lib/nvmf/subsystem.o 00:03:42.980 CC lib/scsi/scsi_bdev.o 00:03:42.980 CC lib/ftl/ftl_layout.o 00:03:42.980 CC lib/ftl/ftl_debug.o 00:03:42.980 CC lib/nvmf/nvmf.o 00:03:42.980 CC lib/scsi/scsi_pr.o 00:03:42.980 CC lib/nvmf/nvmf_rpc.o 00:03:42.980 CC lib/ftl/ftl_io.o 00:03:42.980 CC lib/scsi/scsi_rpc.o 00:03:42.980 CC lib/nvmf/transport.o 00:03:42.980 CC lib/ftl/ftl_sb.o 00:03:42.980 CC lib/scsi/task.o 00:03:42.980 CC lib/ftl/ftl_l2p.o 00:03:42.980 CC lib/nvmf/tcp.o 00:03:42.980 CC lib/nvmf/stubs.o 00:03:42.980 CC lib/nvmf/mdns_server.o 00:03:42.980 CC lib/ftl/ftl_l2p_flat.o 00:03:42.980 CC lib/ftl/ftl_nv_cache.o 00:03:42.980 CC lib/nvmf/vfio_user.o 00:03:42.980 CC lib/ftl/ftl_band.o 00:03:42.980 CC lib/nvmf/rdma.o 00:03:42.980 CC lib/nvmf/auth.o 00:03:42.980 CC lib/ftl/ftl_band_ops.o 00:03:42.980 CC lib/ftl/ftl_writer.o 00:03:42.980 CC lib/ftl/ftl_rq.o 00:03:42.980 CC lib/ftl/ftl_reloc.o 00:03:42.980 CC lib/ftl/ftl_l2p_cache.o 00:03:42.980 CC lib/ftl/ftl_p2l.o 00:03:42.980 CC lib/ftl/mngt/ftl_mngt.o 00:03:42.980 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:42.980 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:42.980 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:42.980 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:42.980 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:42.980 LIB libspdk_lvol.a 00:03:42.980 SO 
libspdk_lvol.so.10.0 00:03:43.237 SYMLINK libspdk_lvol.so 00:03:43.237 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:43.237 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:43.499 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:43.499 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:43.499 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:43.499 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:43.499 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:43.499 CC lib/ftl/utils/ftl_conf.o 00:03:43.499 CC lib/ftl/utils/ftl_md.o 00:03:43.499 CC lib/ftl/utils/ftl_mempool.o 00:03:43.499 CC lib/ftl/utils/ftl_bitmap.o 00:03:43.499 CC lib/ftl/utils/ftl_property.o 00:03:43.499 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:43.499 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:43.499 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:43.499 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:43.499 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:43.499 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:43.499 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:43.499 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:43.499 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:43.758 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:43.758 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:43.758 CC lib/ftl/base/ftl_base_dev.o 00:03:43.758 CC lib/ftl/base/ftl_base_bdev.o 00:03:43.758 CC lib/ftl/ftl_trace.o 00:03:43.758 LIB libspdk_nbd.a 00:03:43.758 SO libspdk_nbd.so.7.0 00:03:44.016 SYMLINK libspdk_nbd.so 00:03:44.016 LIB libspdk_scsi.a 00:03:44.016 SO libspdk_scsi.so.9.0 00:03:44.016 LIB libspdk_ublk.a 00:03:44.016 SYMLINK libspdk_scsi.so 00:03:44.016 SO libspdk_ublk.so.3.0 00:03:44.273 SYMLINK libspdk_ublk.so 00:03:44.273 CC lib/iscsi/conn.o 00:03:44.273 CC lib/vhost/vhost.o 00:03:44.273 CC lib/iscsi/init_grp.o 00:03:44.273 CC lib/vhost/vhost_rpc.o 00:03:44.273 CC lib/iscsi/iscsi.o 00:03:44.273 CC lib/iscsi/md5.o 00:03:44.273 CC lib/vhost/vhost_scsi.o 00:03:44.273 CC lib/iscsi/param.o 00:03:44.273 CC lib/vhost/vhost_blk.o 00:03:44.273 CC lib/iscsi/portal_grp.o 00:03:44.273 CC lib/vhost/rte_vhost_user.o 00:03:44.273 CC 
lib/iscsi/tgt_node.o 00:03:44.273 CC lib/iscsi/iscsi_subsystem.o 00:03:44.273 CC lib/iscsi/iscsi_rpc.o 00:03:44.273 CC lib/iscsi/task.o 00:03:44.530 LIB libspdk_ftl.a 00:03:44.530 SO libspdk_ftl.so.9.0 00:03:45.125 SYMLINK libspdk_ftl.so 00:03:45.383 LIB libspdk_vhost.a 00:03:45.642 SO libspdk_vhost.so.8.0 00:03:45.642 LIB libspdk_nvmf.a 00:03:45.642 SYMLINK libspdk_vhost.so 00:03:45.642 SO libspdk_nvmf.so.18.1 00:03:45.642 LIB libspdk_iscsi.a 00:03:45.937 SO libspdk_iscsi.so.8.0 00:03:45.937 SYMLINK libspdk_nvmf.so 00:03:45.937 SYMLINK libspdk_iscsi.so 00:03:46.194 CC module/env_dpdk/env_dpdk_rpc.o 00:03:46.194 CC module/vfu_device/vfu_virtio.o 00:03:46.194 CC module/vfu_device/vfu_virtio_blk.o 00:03:46.194 CC module/vfu_device/vfu_virtio_scsi.o 00:03:46.194 CC module/vfu_device/vfu_virtio_rpc.o 00:03:46.194 CC module/accel/ioat/accel_ioat.o 00:03:46.194 CC module/keyring/linux/keyring.o 00:03:46.194 CC module/accel/error/accel_error.o 00:03:46.194 CC module/accel/iaa/accel_iaa.o 00:03:46.194 CC module/accel/dsa/accel_dsa.o 00:03:46.194 CC module/accel/error/accel_error_rpc.o 00:03:46.194 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:46.194 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:46.194 CC module/blob/bdev/blob_bdev.o 00:03:46.194 CC module/keyring/linux/keyring_rpc.o 00:03:46.194 CC module/scheduler/gscheduler/gscheduler.o 00:03:46.194 CC module/accel/ioat/accel_ioat_rpc.o 00:03:46.194 CC module/accel/dsa/accel_dsa_rpc.o 00:03:46.194 CC module/accel/iaa/accel_iaa_rpc.o 00:03:46.194 CC module/keyring/file/keyring.o 00:03:46.194 CC module/sock/posix/posix.o 00:03:46.194 CC module/keyring/file/keyring_rpc.o 00:03:46.452 LIB libspdk_env_dpdk_rpc.a 00:03:46.452 SO libspdk_env_dpdk_rpc.so.6.0 00:03:46.452 SYMLINK libspdk_env_dpdk_rpc.so 00:03:46.452 LIB libspdk_keyring_file.a 00:03:46.452 LIB libspdk_keyring_linux.a 00:03:46.452 LIB libspdk_scheduler_gscheduler.a 00:03:46.452 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.452 SO 
libspdk_keyring_file.so.1.0 00:03:46.452 SO libspdk_keyring_linux.so.1.0 00:03:46.452 SO libspdk_scheduler_gscheduler.so.4.0 00:03:46.452 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:46.452 LIB libspdk_accel_error.a 00:03:46.452 LIB libspdk_scheduler_dynamic.a 00:03:46.452 LIB libspdk_accel_ioat.a 00:03:46.452 LIB libspdk_accel_iaa.a 00:03:46.452 SO libspdk_scheduler_dynamic.so.4.0 00:03:46.452 SO libspdk_accel_error.so.2.0 00:03:46.452 SO libspdk_accel_ioat.so.6.0 00:03:46.452 SYMLINK libspdk_keyring_file.so 00:03:46.452 SYMLINK libspdk_keyring_linux.so 00:03:46.452 SYMLINK libspdk_scheduler_gscheduler.so 00:03:46.452 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:46.452 SO libspdk_accel_iaa.so.3.0 00:03:46.710 SYMLINK libspdk_scheduler_dynamic.so 00:03:46.710 LIB libspdk_accel_dsa.a 00:03:46.710 SYMLINK libspdk_accel_ioat.so 00:03:46.710 SYMLINK libspdk_accel_error.so 00:03:46.710 LIB libspdk_blob_bdev.a 00:03:46.710 SO libspdk_accel_dsa.so.5.0 00:03:46.710 SYMLINK libspdk_accel_iaa.so 00:03:46.710 SO libspdk_blob_bdev.so.11.0 00:03:46.710 SYMLINK libspdk_blob_bdev.so 00:03:46.710 SYMLINK libspdk_accel_dsa.so 00:03:46.970 LIB libspdk_vfu_device.a 00:03:46.970 SO libspdk_vfu_device.so.3.0 00:03:46.970 CC module/bdev/gpt/gpt.o 00:03:46.970 CC module/bdev/error/vbdev_error.o 00:03:46.970 CC module/blobfs/bdev/blobfs_bdev.o 00:03:46.970 CC module/bdev/delay/vbdev_delay.o 00:03:46.970 CC module/bdev/error/vbdev_error_rpc.o 00:03:46.970 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:46.970 CC module/bdev/passthru/vbdev_passthru.o 00:03:46.970 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:46.970 CC module/bdev/gpt/vbdev_gpt.o 00:03:46.970 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:46.970 CC module/bdev/null/bdev_null.o 00:03:46.970 CC module/bdev/null/bdev_null_rpc.o 00:03:46.970 CC module/bdev/nvme/bdev_nvme.o 00:03:46.970 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:46.970 CC module/bdev/nvme/nvme_rpc.o 00:03:46.970 CC module/bdev/lvol/vbdev_lvol.o 
00:03:46.970 CC module/bdev/malloc/bdev_malloc.o 00:03:46.970 CC module/bdev/nvme/bdev_mdns_client.o 00:03:46.970 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:46.970 CC module/bdev/nvme/vbdev_opal.o 00:03:46.970 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:46.970 CC module/bdev/raid/bdev_raid.o 00:03:46.970 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:46.970 CC module/bdev/iscsi/bdev_iscsi.o 00:03:46.970 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:46.970 CC module/bdev/ftl/bdev_ftl.o 00:03:46.970 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:46.970 CC module/bdev/raid/bdev_raid_rpc.o 00:03:46.970 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:46.970 CC module/bdev/raid/bdev_raid_sb.o 00:03:46.970 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:46.970 CC module/bdev/raid/raid0.o 00:03:46.970 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:46.970 CC module/bdev/raid/raid1.o 00:03:46.970 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:46.970 CC module/bdev/raid/concat.o 00:03:46.970 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:46.970 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:46.970 CC module/bdev/aio/bdev_aio.o 00:03:46.970 CC module/bdev/split/vbdev_split.o 00:03:46.970 CC module/bdev/aio/bdev_aio_rpc.o 00:03:46.970 CC module/bdev/split/vbdev_split_rpc.o 00:03:46.970 SYMLINK libspdk_vfu_device.so 00:03:47.228 LIB libspdk_sock_posix.a 00:03:47.228 SO libspdk_sock_posix.so.6.0 00:03:47.228 SYMLINK libspdk_sock_posix.so 00:03:47.228 LIB libspdk_bdev_split.a 00:03:47.228 LIB libspdk_blobfs_bdev.a 00:03:47.228 SO libspdk_bdev_split.so.6.0 00:03:47.228 SO libspdk_blobfs_bdev.so.6.0 00:03:47.485 SYMLINK libspdk_bdev_split.so 00:03:47.486 LIB libspdk_bdev_error.a 00:03:47.486 SYMLINK libspdk_blobfs_bdev.so 00:03:47.486 LIB libspdk_bdev_passthru.a 00:03:47.486 LIB libspdk_bdev_null.a 00:03:47.486 SO libspdk_bdev_error.so.6.0 00:03:47.486 LIB libspdk_bdev_gpt.a 00:03:47.486 SO libspdk_bdev_passthru.so.6.0 00:03:47.486 SO libspdk_bdev_null.so.6.0 00:03:47.486 SO 
libspdk_bdev_gpt.so.6.0 00:03:47.486 LIB libspdk_bdev_zone_block.a 00:03:47.486 SYMLINK libspdk_bdev_error.so 00:03:47.486 LIB libspdk_bdev_ftl.a 00:03:47.486 SO libspdk_bdev_zone_block.so.6.0 00:03:47.486 LIB libspdk_bdev_malloc.a 00:03:47.486 SYMLINK libspdk_bdev_null.so 00:03:47.486 SYMLINK libspdk_bdev_gpt.so 00:03:47.486 SYMLINK libspdk_bdev_passthru.so 00:03:47.486 SO libspdk_bdev_ftl.so.6.0 00:03:47.486 LIB libspdk_bdev_iscsi.a 00:03:47.486 LIB libspdk_bdev_aio.a 00:03:47.486 SO libspdk_bdev_malloc.so.6.0 00:03:47.486 SYMLINK libspdk_bdev_zone_block.so 00:03:47.486 SO libspdk_bdev_aio.so.6.0 00:03:47.486 SO libspdk_bdev_iscsi.so.6.0 00:03:47.486 LIB libspdk_bdev_delay.a 00:03:47.486 SYMLINK libspdk_bdev_ftl.so 00:03:47.486 SYMLINK libspdk_bdev_malloc.so 00:03:47.486 SO libspdk_bdev_delay.so.6.0 00:03:47.743 SYMLINK libspdk_bdev_aio.so 00:03:47.743 SYMLINK libspdk_bdev_iscsi.so 00:03:47.743 SYMLINK libspdk_bdev_delay.so 00:03:47.743 LIB libspdk_bdev_lvol.a 00:03:47.743 LIB libspdk_bdev_virtio.a 00:03:47.743 SO libspdk_bdev_lvol.so.6.0 00:03:47.743 SO libspdk_bdev_virtio.so.6.0 00:03:47.743 SYMLINK libspdk_bdev_lvol.so 00:03:47.743 SYMLINK libspdk_bdev_virtio.so 00:03:48.002 LIB libspdk_bdev_raid.a 00:03:48.260 SO libspdk_bdev_raid.so.6.0 00:03:48.260 SYMLINK libspdk_bdev_raid.so 00:03:49.211 LIB libspdk_bdev_nvme.a 00:03:49.211 SO libspdk_bdev_nvme.so.7.0 00:03:49.469 SYMLINK libspdk_bdev_nvme.so 00:03:49.726 CC module/event/subsystems/iobuf/iobuf.o 00:03:49.726 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:49.726 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:49.726 CC module/event/subsystems/scheduler/scheduler.o 00:03:49.726 CC module/event/subsystems/keyring/keyring.o 00:03:49.726 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:49.726 CC module/event/subsystems/vmd/vmd.o 00:03:49.726 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:49.726 CC module/event/subsystems/sock/sock.o 00:03:49.985 LIB libspdk_event_keyring.a 00:03:49.985 LIB 
libspdk_event_vhost_blk.a 00:03:49.985 LIB libspdk_event_scheduler.a 00:03:49.985 LIB libspdk_event_vfu_tgt.a 00:03:49.985 LIB libspdk_event_vmd.a 00:03:49.985 LIB libspdk_event_sock.a 00:03:49.985 LIB libspdk_event_iobuf.a 00:03:49.985 SO libspdk_event_keyring.so.1.0 00:03:49.985 SO libspdk_event_vhost_blk.so.3.0 00:03:49.985 SO libspdk_event_scheduler.so.4.0 00:03:49.985 SO libspdk_event_sock.so.5.0 00:03:49.985 SO libspdk_event_vfu_tgt.so.3.0 00:03:49.985 SO libspdk_event_vmd.so.6.0 00:03:49.985 SO libspdk_event_iobuf.so.3.0 00:03:49.985 SYMLINK libspdk_event_vhost_blk.so 00:03:49.985 SYMLINK libspdk_event_keyring.so 00:03:49.985 SYMLINK libspdk_event_vfu_tgt.so 00:03:49.985 SYMLINK libspdk_event_scheduler.so 00:03:49.985 SYMLINK libspdk_event_sock.so 00:03:49.985 SYMLINK libspdk_event_vmd.so 00:03:49.985 SYMLINK libspdk_event_iobuf.so 00:03:50.243 CC module/event/subsystems/accel/accel.o 00:03:50.243 LIB libspdk_event_accel.a 00:03:50.243 SO libspdk_event_accel.so.6.0 00:03:50.502 SYMLINK libspdk_event_accel.so 00:03:50.502 CC module/event/subsystems/bdev/bdev.o 00:03:50.760 LIB libspdk_event_bdev.a 00:03:50.760 SO libspdk_event_bdev.so.6.0 00:03:50.760 SYMLINK libspdk_event_bdev.so 00:03:51.018 CC module/event/subsystems/scsi/scsi.o 00:03:51.018 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:51.018 CC module/event/subsystems/nbd/nbd.o 00:03:51.018 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:51.018 CC module/event/subsystems/ublk/ublk.o 00:03:51.018 LIB libspdk_event_nbd.a 00:03:51.018 LIB libspdk_event_ublk.a 00:03:51.277 LIB libspdk_event_scsi.a 00:03:51.277 SO libspdk_event_nbd.so.6.0 00:03:51.277 SO libspdk_event_ublk.so.3.0 00:03:51.277 SO libspdk_event_scsi.so.6.0 00:03:51.277 SYMLINK libspdk_event_nbd.so 00:03:51.277 SYMLINK libspdk_event_ublk.so 00:03:51.277 SYMLINK libspdk_event_scsi.so 00:03:51.277 LIB libspdk_event_nvmf.a 00:03:51.277 SO libspdk_event_nvmf.so.6.0 00:03:51.277 SYMLINK libspdk_event_nvmf.so 00:03:51.277 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:51.277 CC module/event/subsystems/iscsi/iscsi.o 00:03:51.535 LIB libspdk_event_vhost_scsi.a 00:03:51.535 LIB libspdk_event_iscsi.a 00:03:51.535 SO libspdk_event_vhost_scsi.so.3.0 00:03:51.535 SO libspdk_event_iscsi.so.6.0 00:03:51.535 SYMLINK libspdk_event_vhost_scsi.so 00:03:51.535 SYMLINK libspdk_event_iscsi.so 00:03:51.793 SO libspdk.so.6.0 00:03:51.793 SYMLINK libspdk.so 00:03:52.061 CC app/trace_record/trace_record.o 00:03:52.061 CC app/spdk_top/spdk_top.o 00:03:52.061 CC app/spdk_lspci/spdk_lspci.o 00:03:52.061 CXX app/trace/trace.o 00:03:52.061 TEST_HEADER include/spdk/accel.h 00:03:52.061 TEST_HEADER include/spdk/accel_module.h 00:03:52.061 TEST_HEADER include/spdk/assert.h 00:03:52.061 CC app/spdk_nvme_identify/identify.o 00:03:52.061 TEST_HEADER include/spdk/barrier.h 00:03:52.061 TEST_HEADER include/spdk/base64.h 00:03:52.061 CC app/spdk_nvme_perf/perf.o 00:03:52.061 TEST_HEADER include/spdk/bdev.h 00:03:52.061 TEST_HEADER include/spdk/bdev_module.h 00:03:52.061 CC app/spdk_nvme_discover/discovery_aer.o 00:03:52.061 TEST_HEADER include/spdk/bdev_zone.h 00:03:52.061 CC test/rpc_client/rpc_client_test.o 00:03:52.061 TEST_HEADER include/spdk/bit_array.h 00:03:52.061 TEST_HEADER include/spdk/bit_pool.h 00:03:52.061 TEST_HEADER include/spdk/blob_bdev.h 00:03:52.061 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:52.061 TEST_HEADER include/spdk/blobfs.h 00:03:52.061 TEST_HEADER include/spdk/blob.h 00:03:52.061 TEST_HEADER include/spdk/conf.h 00:03:52.061 TEST_HEADER include/spdk/config.h 00:03:52.061 TEST_HEADER include/spdk/cpuset.h 00:03:52.061 TEST_HEADER include/spdk/crc16.h 00:03:52.061 TEST_HEADER include/spdk/crc32.h 00:03:52.061 TEST_HEADER include/spdk/crc64.h 00:03:52.061 TEST_HEADER include/spdk/dma.h 00:03:52.061 TEST_HEADER include/spdk/dif.h 00:03:52.061 TEST_HEADER include/spdk/endian.h 00:03:52.061 TEST_HEADER include/spdk/env_dpdk.h 00:03:52.061 TEST_HEADER include/spdk/env.h 00:03:52.061 
TEST_HEADER include/spdk/event.h 00:03:52.061 TEST_HEADER include/spdk/fd_group.h 00:03:52.061 TEST_HEADER include/spdk/fd.h 00:03:52.061 TEST_HEADER include/spdk/file.h 00:03:52.061 TEST_HEADER include/spdk/ftl.h 00:03:52.061 TEST_HEADER include/spdk/gpt_spec.h 00:03:52.061 TEST_HEADER include/spdk/hexlify.h 00:03:52.061 TEST_HEADER include/spdk/histogram_data.h 00:03:52.061 TEST_HEADER include/spdk/idxd.h 00:03:52.061 TEST_HEADER include/spdk/idxd_spec.h 00:03:52.061 TEST_HEADER include/spdk/init.h 00:03:52.061 TEST_HEADER include/spdk/ioat.h 00:03:52.061 TEST_HEADER include/spdk/ioat_spec.h 00:03:52.061 TEST_HEADER include/spdk/iscsi_spec.h 00:03:52.061 TEST_HEADER include/spdk/json.h 00:03:52.061 TEST_HEADER include/spdk/jsonrpc.h 00:03:52.061 TEST_HEADER include/spdk/keyring.h 00:03:52.061 TEST_HEADER include/spdk/keyring_module.h 00:03:52.061 TEST_HEADER include/spdk/likely.h 00:03:52.061 TEST_HEADER include/spdk/log.h 00:03:52.061 TEST_HEADER include/spdk/lvol.h 00:03:52.061 TEST_HEADER include/spdk/memory.h 00:03:52.061 TEST_HEADER include/spdk/mmio.h 00:03:52.061 TEST_HEADER include/spdk/nbd.h 00:03:52.061 TEST_HEADER include/spdk/nvme.h 00:03:52.061 TEST_HEADER include/spdk/notify.h 00:03:52.061 TEST_HEADER include/spdk/nvme_intel.h 00:03:52.061 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:52.061 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:52.061 TEST_HEADER include/spdk/nvme_spec.h 00:03:52.061 TEST_HEADER include/spdk/nvme_zns.h 00:03:52.061 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:52.061 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:52.061 TEST_HEADER include/spdk/nvmf.h 00:03:52.061 TEST_HEADER include/spdk/nvmf_spec.h 00:03:52.061 TEST_HEADER include/spdk/nvmf_transport.h 00:03:52.061 TEST_HEADER include/spdk/opal.h 00:03:52.061 TEST_HEADER include/spdk/opal_spec.h 00:03:52.061 TEST_HEADER include/spdk/pipe.h 00:03:52.061 TEST_HEADER include/spdk/pci_ids.h 00:03:52.061 TEST_HEADER include/spdk/queue.h 00:03:52.061 TEST_HEADER 
include/spdk/reduce.h 00:03:52.061 TEST_HEADER include/spdk/rpc.h 00:03:52.061 TEST_HEADER include/spdk/scheduler.h 00:03:52.061 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:52.061 TEST_HEADER include/spdk/scsi.h 00:03:52.061 TEST_HEADER include/spdk/scsi_spec.h 00:03:52.061 TEST_HEADER include/spdk/stdinc.h 00:03:52.061 TEST_HEADER include/spdk/sock.h 00:03:52.061 TEST_HEADER include/spdk/string.h 00:03:52.061 TEST_HEADER include/spdk/thread.h 00:03:52.061 TEST_HEADER include/spdk/trace_parser.h 00:03:52.061 TEST_HEADER include/spdk/trace.h 00:03:52.061 TEST_HEADER include/spdk/tree.h 00:03:52.061 TEST_HEADER include/spdk/ublk.h 00:03:52.061 TEST_HEADER include/spdk/util.h 00:03:52.061 TEST_HEADER include/spdk/uuid.h 00:03:52.061 TEST_HEADER include/spdk/version.h 00:03:52.061 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:52.061 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:52.061 TEST_HEADER include/spdk/vhost.h 00:03:52.061 TEST_HEADER include/spdk/vmd.h 00:03:52.061 CC app/spdk_dd/spdk_dd.o 00:03:52.061 TEST_HEADER include/spdk/xor.h 00:03:52.061 TEST_HEADER include/spdk/zipf.h 00:03:52.061 CXX test/cpp_headers/accel.o 00:03:52.061 CXX test/cpp_headers/accel_module.o 00:03:52.061 CXX test/cpp_headers/assert.o 00:03:52.061 CXX test/cpp_headers/barrier.o 00:03:52.061 CXX test/cpp_headers/base64.o 00:03:52.061 CXX test/cpp_headers/bdev.o 00:03:52.061 CXX test/cpp_headers/bdev_module.o 00:03:52.061 CXX test/cpp_headers/bdev_zone.o 00:03:52.061 CXX test/cpp_headers/bit_array.o 00:03:52.061 CXX test/cpp_headers/bit_pool.o 00:03:52.061 CXX test/cpp_headers/blob_bdev.o 00:03:52.061 CXX test/cpp_headers/blobfs_bdev.o 00:03:52.061 CXX test/cpp_headers/blobfs.o 00:03:52.061 CXX test/cpp_headers/blob.o 00:03:52.061 CXX test/cpp_headers/conf.o 00:03:52.061 CXX test/cpp_headers/config.o 00:03:52.061 CXX test/cpp_headers/cpuset.o 00:03:52.061 CXX test/cpp_headers/crc16.o 00:03:52.061 CC app/nvmf_tgt/nvmf_main.o 00:03:52.061 CC app/iscsi_tgt/iscsi_tgt.o 00:03:52.061 CXX 
test/cpp_headers/crc32.o 00:03:52.061 CC app/spdk_tgt/spdk_tgt.o 00:03:52.061 CC examples/ioat/verify/verify.o 00:03:52.061 CC examples/ioat/perf/perf.o 00:03:52.061 CC examples/util/zipf/zipf.o 00:03:52.061 CC test/thread/poller_perf/poller_perf.o 00:03:52.061 CC test/app/histogram_perf/histogram_perf.o 00:03:52.061 CC test/env/vtophys/vtophys.o 00:03:52.061 CC app/fio/nvme/fio_plugin.o 00:03:52.061 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:52.061 CC test/app/jsoncat/jsoncat.o 00:03:52.061 CC test/env/memory/memory_ut.o 00:03:52.061 CC test/env/pci/pci_ut.o 00:03:52.061 CC test/app/stub/stub.o 00:03:52.061 CC test/dma/test_dma/test_dma.o 00:03:52.320 CC app/fio/bdev/fio_plugin.o 00:03:52.321 CC test/app/bdev_svc/bdev_svc.o 00:03:52.321 LINK spdk_lspci 00:03:52.321 CC test/env/mem_callbacks/mem_callbacks.o 00:03:52.321 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:52.321 LINK spdk_nvme_discover 00:03:52.321 LINK rpc_client_test 00:03:52.321 LINK interrupt_tgt 00:03:52.321 LINK jsoncat 00:03:52.321 LINK vtophys 00:03:52.321 LINK poller_perf 00:03:52.582 CXX test/cpp_headers/crc64.o 00:03:52.582 LINK nvmf_tgt 00:03:52.582 LINK histogram_perf 00:03:52.582 CXX test/cpp_headers/dif.o 00:03:52.582 LINK zipf 00:03:52.582 CXX test/cpp_headers/dma.o 00:03:52.582 LINK spdk_trace_record 00:03:52.582 CXX test/cpp_headers/endian.o 00:03:52.583 CXX test/cpp_headers/env_dpdk.o 00:03:52.583 CXX test/cpp_headers/env.o 00:03:52.583 CXX test/cpp_headers/event.o 00:03:52.583 CXX test/cpp_headers/fd_group.o 00:03:52.583 LINK env_dpdk_post_init 00:03:52.583 CXX test/cpp_headers/fd.o 00:03:52.583 CXX test/cpp_headers/file.o 00:03:52.583 CXX test/cpp_headers/ftl.o 00:03:52.583 LINK iscsi_tgt 00:03:52.583 LINK stub 00:03:52.583 CXX test/cpp_headers/gpt_spec.o 00:03:52.583 LINK spdk_tgt 00:03:52.583 CXX test/cpp_headers/hexlify.o 00:03:52.583 CXX test/cpp_headers/histogram_data.o 00:03:52.583 CXX test/cpp_headers/idxd.o 00:03:52.583 LINK ioat_perf 00:03:52.583 CXX 
test/cpp_headers/idxd_spec.o 00:03:52.583 LINK verify 00:03:52.583 LINK bdev_svc 00:03:52.583 CXX test/cpp_headers/init.o 00:03:52.583 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:52.583 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:52.849 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:52.849 CXX test/cpp_headers/ioat.o 00:03:52.849 CXX test/cpp_headers/ioat_spec.o 00:03:52.849 CXX test/cpp_headers/iscsi_spec.o 00:03:52.849 CXX test/cpp_headers/json.o 00:03:52.849 CXX test/cpp_headers/jsonrpc.o 00:03:52.849 CXX test/cpp_headers/keyring.o 00:03:52.849 CXX test/cpp_headers/keyring_module.o 00:03:52.849 LINK spdk_dd 00:03:52.849 CXX test/cpp_headers/likely.o 00:03:52.849 CXX test/cpp_headers/log.o 00:03:52.849 CXX test/cpp_headers/lvol.o 00:03:52.849 CXX test/cpp_headers/memory.o 00:03:52.849 CXX test/cpp_headers/mmio.o 00:03:52.849 CXX test/cpp_headers/nbd.o 00:03:52.849 CXX test/cpp_headers/notify.o 00:03:52.849 LINK spdk_trace 00:03:52.849 CXX test/cpp_headers/nvme_intel.o 00:03:52.849 CXX test/cpp_headers/nvme.o 00:03:52.849 CXX test/cpp_headers/nvme_ocssd.o 00:03:52.849 LINK pci_ut 00:03:52.849 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:52.849 CXX test/cpp_headers/nvme_spec.o 00:03:52.849 LINK test_dma 00:03:52.849 CXX test/cpp_headers/nvme_zns.o 00:03:52.849 CXX test/cpp_headers/nvmf_cmd.o 00:03:52.849 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:52.849 CXX test/cpp_headers/nvmf.o 00:03:53.115 CXX test/cpp_headers/nvmf_spec.o 00:03:53.115 CXX test/cpp_headers/nvmf_transport.o 00:03:53.115 CXX test/cpp_headers/opal.o 00:03:53.115 CC test/event/reactor/reactor.o 00:03:53.115 CC test/event/event_perf/event_perf.o 00:03:53.115 CC test/event/reactor_perf/reactor_perf.o 00:03:53.115 LINK nvme_fuzz 00:03:53.115 CXX test/cpp_headers/opal_spec.o 00:03:53.115 CXX test/cpp_headers/pci_ids.o 00:03:53.115 CXX test/cpp_headers/pipe.o 00:03:53.115 CXX test/cpp_headers/queue.o 00:03:53.115 CC test/event/app_repeat/app_repeat.o 00:03:53.115 CXX test/cpp_headers/reduce.o 
00:03:53.115 CXX test/cpp_headers/rpc.o 00:03:53.115 CC examples/sock/hello_world/hello_sock.o 00:03:53.115 CC examples/vmd/lsvmd/lsvmd.o 00:03:53.115 LINK spdk_bdev 00:03:53.115 CXX test/cpp_headers/scheduler.o 00:03:53.115 CC test/event/scheduler/scheduler.o 00:03:53.115 CXX test/cpp_headers/scsi.o 00:03:53.115 CC examples/thread/thread/thread_ex.o 00:03:53.115 CC examples/idxd/perf/perf.o 00:03:53.115 CXX test/cpp_headers/scsi_spec.o 00:03:53.373 CC examples/vmd/led/led.o 00:03:53.373 CXX test/cpp_headers/sock.o 00:03:53.373 CXX test/cpp_headers/stdinc.o 00:03:53.373 LINK spdk_nvme 00:03:53.373 CXX test/cpp_headers/string.o 00:03:53.373 CXX test/cpp_headers/thread.o 00:03:53.373 CXX test/cpp_headers/trace.o 00:03:53.373 CXX test/cpp_headers/trace_parser.o 00:03:53.373 CXX test/cpp_headers/tree.o 00:03:53.373 CXX test/cpp_headers/ublk.o 00:03:53.373 CXX test/cpp_headers/util.o 00:03:53.373 CXX test/cpp_headers/uuid.o 00:03:53.373 CXX test/cpp_headers/version.o 00:03:53.373 CXX test/cpp_headers/vfio_user_pci.o 00:03:53.373 LINK reactor 00:03:53.373 CXX test/cpp_headers/vfio_user_spec.o 00:03:53.373 CXX test/cpp_headers/vhost.o 00:03:53.373 LINK event_perf 00:03:53.373 CXX test/cpp_headers/vmd.o 00:03:53.373 CXX test/cpp_headers/xor.o 00:03:53.373 CXX test/cpp_headers/zipf.o 00:03:53.373 LINK reactor_perf 00:03:53.373 LINK mem_callbacks 00:03:53.373 CC app/vhost/vhost.o 00:03:53.373 LINK app_repeat 00:03:53.632 LINK lsvmd 00:03:53.632 LINK vhost_fuzz 00:03:53.632 LINK spdk_nvme_perf 00:03:53.632 LINK led 00:03:53.632 LINK spdk_nvme_identify 00:03:53.632 LINK spdk_top 00:03:53.632 CC test/nvme/sgl/sgl.o 00:03:53.632 CC test/nvme/err_injection/err_injection.o 00:03:53.632 CC test/nvme/reset/reset.o 00:03:53.632 CC test/nvme/reserve/reserve.o 00:03:53.632 CC test/nvme/aer/aer.o 00:03:53.632 CC test/nvme/startup/startup.o 00:03:53.632 CC test/nvme/e2edp/nvme_dp.o 00:03:53.632 CC test/nvme/overhead/overhead.o 00:03:53.632 LINK scheduler 00:03:53.632 CC 
test/accel/dif/dif.o 00:03:53.632 CC test/nvme/simple_copy/simple_copy.o 00:03:53.632 CC test/blobfs/mkfs/mkfs.o 00:03:53.632 LINK hello_sock 00:03:53.632 LINK thread 00:03:53.632 CC test/nvme/connect_stress/connect_stress.o 00:03:53.891 CC test/nvme/boot_partition/boot_partition.o 00:03:53.891 CC test/nvme/compliance/nvme_compliance.o 00:03:53.891 CC test/nvme/fused_ordering/fused_ordering.o 00:03:53.891 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:53.891 CC test/nvme/fdp/fdp.o 00:03:53.891 CC test/nvme/cuse/cuse.o 00:03:53.891 CC test/lvol/esnap/esnap.o 00:03:53.891 LINK vhost 00:03:53.891 LINK idxd_perf 00:03:53.891 LINK startup 00:03:53.891 LINK err_injection 00:03:53.891 LINK boot_partition 00:03:53.891 LINK reserve 00:03:54.150 LINK simple_copy 00:03:54.150 LINK doorbell_aers 00:03:54.150 LINK reset 00:03:54.151 LINK sgl 00:03:54.151 LINK overhead 00:03:54.151 LINK connect_stress 00:03:54.151 LINK nvme_dp 00:03:54.151 LINK mkfs 00:03:54.151 LINK fused_ordering 00:03:54.151 LINK nvme_compliance 00:03:54.151 CC examples/nvme/hotplug/hotplug.o 00:03:54.151 CC examples/nvme/arbitration/arbitration.o 00:03:54.151 CC examples/nvme/hello_world/hello_world.o 00:03:54.151 LINK aer 00:03:54.151 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:54.151 CC examples/nvme/abort/abort.o 00:03:54.151 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:54.151 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:54.151 CC examples/nvme/reconnect/reconnect.o 00:03:54.151 LINK fdp 00:03:54.151 LINK memory_ut 00:03:54.409 LINK dif 00:03:54.409 CC examples/accel/perf/accel_perf.o 00:03:54.409 CC examples/blob/cli/blobcli.o 00:03:54.409 CC examples/blob/hello_world/hello_blob.o 00:03:54.409 LINK pmr_persistence 00:03:54.409 LINK cmb_copy 00:03:54.409 LINK hello_world 00:03:54.409 LINK hotplug 00:03:54.666 LINK arbitration 00:03:54.666 LINK reconnect 00:03:54.666 LINK hello_blob 00:03:54.666 LINK abort 00:03:54.666 LINK nvme_manage 00:03:54.666 CC test/bdev/bdevio/bdevio.o 00:03:54.923 
LINK accel_perf 00:03:54.923 LINK blobcli 00:03:54.923 LINK iscsi_fuzz 00:03:55.179 LINK bdevio 00:03:55.179 CC examples/bdev/hello_world/hello_bdev.o 00:03:55.179 CC examples/bdev/bdevperf/bdevperf.o 00:03:55.179 LINK cuse 00:03:55.436 LINK hello_bdev 00:03:56.000 LINK bdevperf 00:03:56.258 CC examples/nvmf/nvmf/nvmf.o 00:03:56.516 LINK nvmf 00:03:59.047 LINK esnap 00:03:59.047 00:03:59.047 real 0m40.632s 00:03:59.047 user 7m24.336s 00:03:59.047 sys 1m48.251s 00:03:59.047 18:35:47 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:59.047 18:35:47 make -- common/autotest_common.sh@10 -- $ set +x 00:03:59.047 ************************************ 00:03:59.047 END TEST make 00:03:59.047 ************************************ 00:03:59.047 18:35:47 -- common/autotest_common.sh@1142 -- $ return 0 00:03:59.047 18:35:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:59.047 18:35:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:59.047 18:35:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:59.047 18:35:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.047 18:35:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:59.047 18:35:47 -- pm/common@44 -- $ pid=3357110 00:03:59.047 18:35:47 -- pm/common@50 -- $ kill -TERM 3357110 00:03:59.047 18:35:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.047 18:35:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:59.047 18:35:47 -- pm/common@44 -- $ pid=3357111 00:03:59.047 18:35:47 -- pm/common@50 -- $ kill -TERM 3357111 00:03:59.047 18:35:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.047 18:35:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:59.047 18:35:47 -- pm/common@44 -- $ pid=3357114 
00:03:59.047 18:35:47 -- pm/common@50 -- $ kill -TERM 3357114 00:03:59.047 18:35:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.047 18:35:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:59.047 18:35:47 -- pm/common@44 -- $ pid=3357145 00:03:59.047 18:35:47 -- pm/common@50 -- $ sudo -E kill -TERM 3357145 00:03:59.047 18:35:47 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:59.047 18:35:47 -- nvmf/common.sh@7 -- # uname -s 00:03:59.047 18:35:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.047 18:35:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.047 18:35:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.047 18:35:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.047 18:35:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.047 18:35:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.047 18:35:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.047 18:35:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.047 18:35:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.047 18:35:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.047 18:35:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:59.047 18:35:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:59.047 18:35:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.047 18:35:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.047 18:35:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:59.047 18:35:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:59.047 18:35:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:59.047 
18:35:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.047 18:35:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.047 18:35:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.047 18:35:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.047 18:35:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.047 18:35:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.047 18:35:47 -- paths/export.sh@5 -- # export PATH 00:03:59.047 18:35:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.047 18:35:47 -- nvmf/common.sh@47 -- # : 0 00:03:59.047 18:35:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:59.047 18:35:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:59.047 18:35:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:59.047 18:35:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.047 18:35:47 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.047 18:35:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:59.047 18:35:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:59.047 18:35:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:59.047 18:35:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.047 18:35:47 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.048 18:35:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:59.048 18:35:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:59.048 18:35:47 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:59.048 18:35:47 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.048 18:35:47 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:59.048 18:35:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.048 18:35:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.048 18:35:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:59.048 18:35:47 -- spdk/autotest.sh@48 -- # udevadm_pid=3433392 00:03:59.048 18:35:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:59.048 18:35:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:59.048 18:35:47 -- pm/common@17 -- # local monitor 00:03:59.048 18:35:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.048 18:35:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.048 18:35:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.048 18:35:47 -- pm/common@21 -- # date +%s 00:03:59.048 18:35:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.048 18:35:47 -- pm/common@21 -- # date +%s 00:03:59.048 18:35:47 -- pm/common@25 -- # sleep 1 00:03:59.048 18:35:47 -- pm/common@21 -- # date +%s 00:03:59.048 18:35:47 -- 
pm/common@21 -- # date +%s 00:03:59.048 18:35:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720974947 00:03:59.048 18:35:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720974947 00:03:59.048 18:35:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720974947 00:03:59.048 18:35:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720974947 00:03:59.048 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720974947_collect-vmstat.pm.log 00:03:59.048 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720974947_collect-cpu-load.pm.log 00:03:59.048 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720974947_collect-cpu-temp.pm.log 00:03:59.048 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720974947_collect-bmc-pm.bmc.pm.log 00:03:59.983 18:35:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:59.983 18:35:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:59.983 18:35:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.983 18:35:48 -- common/autotest_common.sh@10 -- # set +x 00:03:59.983 18:35:48 -- spdk/autotest.sh@59 -- # create_test_list 00:03:59.983 18:35:48 -- common/autotest_common.sh@746 -- 
# xtrace_disable 00:03:59.983 18:35:48 -- common/autotest_common.sh@10 -- # set +x 00:04:00.241 18:35:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:00.241 18:35:48 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.241 18:35:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.241 18:35:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:00.241 18:35:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:00.241 18:35:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:00.241 18:35:48 -- common/autotest_common.sh@1455 -- # uname 00:04:00.241 18:35:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:00.241 18:35:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:00.241 18:35:48 -- common/autotest_common.sh@1475 -- # uname 00:04:00.241 18:35:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:00.241 18:35:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:00.241 18:35:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:00.241 18:35:48 -- spdk/autotest.sh@72 -- # hash lcov 00:04:00.241 18:35:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:00.241 18:35:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:00.241 --rc lcov_branch_coverage=1 00:04:00.241 --rc lcov_function_coverage=1 00:04:00.241 --rc genhtml_branch_coverage=1 00:04:00.241 --rc genhtml_function_coverage=1 00:04:00.241 --rc genhtml_legend=1 00:04:00.241 --rc geninfo_all_blocks=1 00:04:00.241 ' 00:04:00.241 18:35:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:00.241 --rc lcov_branch_coverage=1 00:04:00.241 --rc lcov_function_coverage=1 00:04:00.241 --rc genhtml_branch_coverage=1 00:04:00.241 --rc genhtml_function_coverage=1 00:04:00.241 --rc genhtml_legend=1 00:04:00.241 --rc 
geninfo_all_blocks=1 00:04:00.241 ' 00:04:00.241 18:35:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:00.241 --rc lcov_branch_coverage=1 00:04:00.241 --rc lcov_function_coverage=1 00:04:00.241 --rc genhtml_branch_coverage=1 00:04:00.241 --rc genhtml_function_coverage=1 00:04:00.241 --rc genhtml_legend=1 00:04:00.241 --rc geninfo_all_blocks=1 00:04:00.241 --no-external' 00:04:00.241 18:35:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:00.241 --rc lcov_branch_coverage=1 00:04:00.241 --rc lcov_function_coverage=1 00:04:00.241 --rc genhtml_branch_coverage=1 00:04:00.241 --rc genhtml_function_coverage=1 00:04:00.241 --rc genhtml_legend=1 00:04:00.241 --rc geninfo_all_blocks=1 00:04:00.241 --no-external' 00:04:00.241 18:35:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:00.241 lcov: LCOV version 1.14 00:04:00.241 18:35:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:05.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:05.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:05.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:05.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:05.497 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:05.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:05.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:05.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:05.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:05.497 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:05.755 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no 
functions found 00:04:05.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:05.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:05.756 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:05.756 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:05.756 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:05.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:05.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:05.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:05.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:05.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:05.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:05.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:05.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:05.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:05.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:05.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:05.757 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:05.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:05.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no 
functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:06.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:06.014 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:06.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:27.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:27.956 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:34.504 18:36:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:34.504 18:36:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.504 18:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:34.504 18:36:22 -- spdk/autotest.sh@91 -- # rm -f 00:04:34.504 18:36:22 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.438 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:35.438 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:35.438 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:35.438 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:35.438 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:35.438 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:35.438 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:35.438 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:35.438 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:35.438 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:35.438 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:35.438 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:35.438 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:35.438 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:35.438 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:35.438 0000:80:04.1 (8086 0e21): 
Already using the ioatdma driver 00:04:35.438 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:35.438 18:36:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:35.438 18:36:23 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:35.438 18:36:23 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:35.438 18:36:23 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:35.438 18:36:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:35.438 18:36:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:35.438 18:36:23 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:35.438 18:36:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.438 18:36:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:35.438 18:36:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:35.438 18:36:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.438 18:36:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:35.438 18:36:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:35.438 18:36:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:35.438 18:36:23 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:35.696 No valid GPT data, bailing 00:04:35.696 18:36:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.696 18:36:23 -- scripts/common.sh@391 -- # pt= 00:04:35.696 18:36:23 -- scripts/common.sh@392 -- # return 1 00:04:35.696 18:36:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:35.696 1+0 records in 00:04:35.696 1+0 records out 00:04:35.696 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00230563 s, 455 MB/s 00:04:35.696 18:36:23 -- spdk/autotest.sh@118 -- # sync 00:04:35.696 18:36:23 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:35.696 18:36:23 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:35.696 18:36:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.593 18:36:25 -- spdk/autotest.sh@124 -- # uname -s 00:04:37.593 18:36:25 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:37.593 18:36:25 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:37.594 18:36:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.594 18:36:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.594 18:36:25 -- common/autotest_common.sh@10 -- # set +x 00:04:37.594 ************************************ 00:04:37.594 START TEST setup.sh 00:04:37.594 ************************************ 00:04:37.594 18:36:25 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:37.594 * Looking for test storage... 00:04:37.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:37.594 18:36:25 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:37.594 18:36:25 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:37.594 18:36:25 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:37.594 18:36:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.594 18:36:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.594 18:36:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.594 ************************************ 00:04:37.594 START TEST acl 00:04:37.594 ************************************ 00:04:37.594 18:36:25 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:37.594 * Looking for test storage... 
00:04:37.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:37.594 18:36:25 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:37.594 18:36:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:37.594 18:36:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:37.594 18:36:25 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:37.594 18:36:25 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.594 18:36:25 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:37.594 18:36:25 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:37.594 18:36:25 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.594 18:36:25 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.594 18:36:25 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:37.594 18:36:25 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:37.594 18:36:25 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:37.594 18:36:25 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:37.594 18:36:25 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:37.594 18:36:25 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.594 18:36:25 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.969 18:36:27 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:38.969 18:36:27 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:38.969 18:36:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.969 18:36:27 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:38.969 18:36:27 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.969 18:36:27 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:40.343 Hugepages 00:04:40.343 node hugesize free / total 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 00:04:40.343 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 
18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:40.343 18:36:28 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:40.343 18:36:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.343 18:36:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.343 18:36:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:40.343 ************************************ 00:04:40.343 START TEST denied 00:04:40.343 ************************************ 00:04:40.343 18:36:28 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:40.343 18:36:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:40.343 18:36:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:40.343 18:36:28 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:40.343 18:36:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.343 18:36:28 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.718 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.718 18:36:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.250 00:04:44.250 real 0m3.886s 00:04:44.250 user 0m1.156s 00:04:44.250 sys 0m1.832s 00:04:44.250 18:36:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.250 18:36:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:44.250 ************************************ 00:04:44.250 END TEST denied 00:04:44.250 ************************************ 00:04:44.250 18:36:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:44.250 18:36:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:44.250 18:36:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.250 18:36:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.250 18:36:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:44.250 
************************************ 00:04:44.250 START TEST allowed 00:04:44.250 ************************************ 00:04:44.250 18:36:32 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:44.250 18:36:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:44.250 18:36:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:44.251 18:36:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:44.251 18:36:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.251 18:36:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.780 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:46.780 18:36:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:46.780 18:36:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:46.780 18:36:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:46.780 18:36:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.780 18:36:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.150 00:04:48.150 real 0m3.928s 00:04:48.150 user 0m1.019s 00:04:48.150 sys 0m1.699s 00:04:48.150 18:36:36 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.150 18:36:36 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:48.150 ************************************ 00:04:48.150 END TEST allowed 00:04:48.150 ************************************ 00:04:48.150 18:36:36 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:48.150 00:04:48.150 real 0m10.581s 00:04:48.150 user 0m3.290s 00:04:48.150 sys 0m5.243s 00:04:48.150 18:36:36 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.150 18:36:36 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:04:48.150 ************************************ 00:04:48.150 END TEST acl 00:04:48.150 ************************************ 00:04:48.150 18:36:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:48.150 18:36:36 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:48.150 18:36:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.150 18:36:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.150 18:36:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.150 ************************************ 00:04:48.150 START TEST hugepages 00:04:48.150 ************************************ 00:04:48.150 18:36:36 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:48.151 * Looking for test storage... 00:04:48.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:48.151 18:36:36 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 41229468 kB' 'MemAvailable: 44718772 kB' 'Buffers: 2704 kB' 'Cached: 12775824 kB' 'SwapCached: 0 kB' 'Active: 9737744 kB' 'Inactive: 3492384 kB' 'Active(anon): 9345864 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454880 kB' 'Mapped: 186116 kB' 'Shmem: 8894264 kB' 'KReclaimable: 197756 kB' 'Slab: 559408 kB' 'SReclaimable: 197756 kB' 'SUnreclaim: 361652 kB' 'KernelStack: 12672 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 10455380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:48.151 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:48.152 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/common.sh@31-32 -- # [meminfo scan condensed: SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp all skipped via continue]
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@21-24 -- # unset -v HUGE_EVEN_ALLOC HUGEMEM HUGENODE NRHUGE
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes: nodes_sys[0]=2048 nodes_sys[1]=0, no_nodes=2
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp: echo 0 into each hugepages-*/nr_hugepages on node0 and node1
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:48.152 18:36:36 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:48.152 ************************************
00:04:48.152 START TEST default_setup
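The clear_hp step above walks every NUMA node and zeroes each `hugepages-<size>kB/nr_hugepages` file before the test allocates its own pages. A minimal sketch of that pattern follows; the function name and the dry-run `echo` are ours, not SPDK's (the real script writes `0` into the sysfs files directly, which needs root):

```shell
# Sketch of the clear_hp loop from setup/hugepages.sh: for each NUMA node,
# visit every hugepage-size directory and reset its nr_hugepages count.
# Dry-run variant: prints the writes it would perform instead of doing them.
clear_node_hugepages() {
    local sysfs=${1:-/sys/devices/system/node}
    local node hp
    for node in "$sysfs"/node*; do
        [ -d "$node" ] || continue
        for hp in "$node"/hugepages/hugepages-*; do
            [ -d "$hp" ] || continue
            # real version: echo 0 > "$hp/nr_hugepages"
            echo "0 -> $hp/nr_hugepages"
        done
    done
}
```

Against a fake sysfs tree this reports one reset per node/size pair, matching the four `echo 0` traces in the log (two nodes, two hugepage sizes each).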
00:04:48.152 ************************************
00:04:48.152 18:36:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:04:48.152 18:36:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:48.153 18:36:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49-73 -- # [condensed: size=2097152, node_ids=('0'), nr_hugepages=1024, get_test_nr_hugepages_per_node 0 sets nodes_test[0]=1024, return 0]
00:04:48.153 18:36:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:48.153 18:36:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:49.545 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:49.545 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:49.545 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:49.545 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:49.545 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:49.545 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:49.545 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:49.545 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:49.545 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:49.545 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:49.545 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:49.545 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:49.545 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:49.545 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:49.545 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:49.545 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:50.487 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:50.487 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:50.487 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:04:50.487 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:50.487 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:50.487 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # [condensed: get=AnonHugePages, node='', mem_f=/proc/meminfo, mapfile -t mem]
00:04:50.487 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43342252 kB' 'MemAvailable: 46831536 kB' 'Buffers: 2704 kB' 'Cached: 12775920 kB' 'SwapCached: 0 kB' 'Active: 9756736 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364856 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473800 kB' 'Mapped: 186248 kB' 'Shmem: 8894360 kB' 'KReclaimable: 197716 kB' 'Slab: 558952 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361236 kB' 'KernelStack: 12608 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10476520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB'
00:04:50.487 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [AnonHugePages scan begins: MemTotal, MemFree, MemAvailable skipped via continue]
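Every get_meminfo call in this test uses the same setup/common.sh pattern: split each /proc/meminfo line on `': '` and stop when the requested key matches (the long runs of `continue` traces are exactly that loop skipping non-matching keys). A standalone sketch of the idea, with a function name of our own choosing:

```shell
# Sketch of the get_meminfo pattern from setup/common.sh: read a meminfo-style
# file line by line, splitting on ':' and spaces, and print the value for the
# requested key. Returns non-zero if the key is never found.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}
```

With `IFS=': '` the unit suffix (`kB`) lands in the throwaway `_` variable, so the caller gets a bare number, which is why the traces above end in `echo 2048` and `echo 0` rather than `2048 kB`.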
setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [AnonHugePages scan condensed: Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted all skipped via continue]
00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@17
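The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` check that verify_nr_hugepages runs before counting anonymous hugepages reads /sys/kernel/mm/transparent_hugepage/enabled, where the kernel marks the active THP mode with square brackets. A small sketch of extracting that mode (the helper name is ours; the real script only tests for `[never]`):

```shell
# Given the contents of /sys/kernel/mm/transparent_hugepage/enabled
# (e.g. "always [madvise] never"), report which mode is bracketed.
thp_mode() {
    local state=$1
    case $state in
        *"[always]"*)  echo always ;;
        *"[madvise]"*) echo madvise ;;
        *"[never]"*)   echo never ;;
        *)             echo unknown ;;
    esac
}
```

On this runner the file reads `always [madvise] never`, so the pattern does not contain `[never]`, the test passes, and the script goes on to fetch AnonHugePages.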
-- # local get=HugePages_Surp
00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@18-29 -- # [condensed: node='', local var val, mem_f=/proc/meminfo, mapfile -t mem]
00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43346572 kB' 'MemAvailable: 46835856 kB' 'Buffers: 2704 kB' 'Cached: 12775920 kB' 'SwapCached: 0 kB' 'Active: 9756236 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364356 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473292 kB' 'Mapped: 186184 kB' 'Shmem: 8894360 kB' 'KReclaimable: 197716 kB' 'Slab: 558952 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361236 kB' 'KernelStack: 12640 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10476536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB'
00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [HugePages_Surp scan begins: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon) skipped via continue]
-- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.489 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:50.490 
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.490 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43347196 kB' 'MemAvailable: 46836480 kB' 'Buffers: 2704 kB' 'Cached: 12775940 kB' 'SwapCached: 0 kB' 'Active: 9755964 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364084 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472940 kB' 'Mapped: 186164 kB' 'Shmem: 8894380 kB' 'KReclaimable: 197716 kB' 'Slab: 559104 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361388 kB' 'KernelStack: 12656 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10476560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.491 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.755 
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:50.755 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical common.sh@31-32 compare-and-continue iterations for the remaining /proc/meminfo fields (SwapTotal through HugePages_Free) elided ...]
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
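The trace above shows the get_meminfo pattern from setup/common.sh: each meminfo line is split with `IFS=': '` and `read -r var val _`, and the loop echoes the value once the requested key matches. A minimal standalone sketch of that technique (`get_meminfo_sketch` is an illustrative name, not the actual SPDK helper, and the sample input is fabricated):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scanning loop seen in the trace: split each
# "Key: value [kB]" line on ': ' and print the value for the first
# matching key. Reads its input from stdin.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
        continue  # key did not match; keep scanning
    done
    return 1
}

# Usage with a fabricated two-line meminfo snippet:
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' \
    | get_meminfo_sketch HugePages_Rsvd
# prints 0
```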
resv_hugepages=0
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43347292 kB' 'MemAvailable: 46836576 kB' 'Buffers: 2704 kB' 'Cached: 12775960 kB' 'SwapCached: 0 kB' 'Active: 9755960 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364080 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472940 kB' 'Mapped: 186164 kB' 'Shmem: 8894400 kB' 'KReclaimable: 197716 kB' 'Slab: 559104 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361388 kB' 'KernelStack: 12656 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10476580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB'
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:50.756 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:50.757 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:50.757 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:50.757 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:50.757 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:50.757 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
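The `mem=("${mem[@]#Node +([0-9]) }")` step in the trace exists because per-node meminfo files (`/sys/devices/system/node/nodeN/meminfo`) prefix every line with `Node N `, unlike `/proc/meminfo`; stripping that prefix with an extglob pattern lets both sources parse identically. A standalone sketch of that stripping, with fabricated sample lines:

```shell
#!/usr/bin/env bash
# The +([0-9]) pattern below requires extglob, as in setup/common.sh.
shopt -s extglob

# Fabricated per-node meminfo lines; the real ones come from
# /sys/devices/system/node/nodeN/meminfo and carry a "Node N " prefix.
mem=('Node 0 MemTotal: 60541692 kB' 'Node 0 HugePages_Total: 1024')

# Strip the "Node N " prefix so the lines look like /proc/meminfo.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
# prints:
# MemTotal: 60541692 kB
# HugePages_Total: 1024
```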
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:50.757 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:50.757 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:50.757 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... identical common.sh@31-32 compare-and-continue iterations for the remaining /proc/meminfo fields (Buffers through Unaccepted) elided ...]
00:04:50.758 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:50.758 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:50.758 18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:50.758 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:50.758 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:50.758 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:50.758 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:50.758 18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:50.758
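The trace above is setup/common.sh's get_meminfo walking a meminfo file key by key: pick the per-node sysfs file when a node is given, strip the "Node N " prefix, then scan for the requested key. A minimal standalone sketch of that pattern (a simplified re-reading of the helper seen in this log, not the verbatim SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above (names borrowed from the
# log's setup/common.sh; logic condensed, so treat details as illustrative).
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node stats live under sysfs; fall back to the global /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each key with "Node <N> "; strip it (extglob).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # keep scanning until the key matches
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo MemTotal   # on Linux, prints the MemTotal value in kB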
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19594764 kB' 'MemUsed: 13282176 kB' 'SwapCached: 0 kB' 'Active: 6760596 kB' 'Inactive: 3264848 kB' 'Active(anon): 6572024 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9750160 kB' 'Mapped: 104652 kB' 'AnonPages: 278396 kB' 'Shmem: 6296740 kB' 'KernelStack: 7896 kB' 'PageTables: 5096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124024 kB' 'Slab: 320796 kB' 'SReclaimable: 124024 kB' 'SUnreclaim: 196772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:50.758
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.760
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:50.760
18:36:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:50.760
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.760
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.760
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.760
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.760
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:50.760
node0=1024 expecting 1024
18:36:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:50.760
00:04:50.760 real 0m2.422s
00:04:50.760 user 0m0.644s
00:04:50.760 sys 0m0.912s
18:36:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.760
18:36:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:50.760
************************************ 00:04:50.760
END TEST default_setup 00:04:50.760
************************************ 00:04:50.760
18:36:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:50.760
18:36:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:50.760
18:36:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.760
18:36:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.760
18:36:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:50.760
************************************ 00:04:50.760
START TEST per_node_1G_alloc 00:04:50.760
************************************ 00:04:50.760
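The default_setup verification that produced "node0=1024 expecting 1024" above reduces to: sum the per-node page counts, compare against requested + surplus + reserved, and report each node. An illustrative sketch with this run's values hard-coded (the real verify_nr_hugepages in setup/hugepages.sh reads them back from /proc/meminfo and sysfs):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the verify_nr_hugepages bookkeeping seen above.
# Values are hard-coded for the default_setup case (1024 pages, all on
# node 0) rather than read from the kernel, so this is not the SPDK script.
nr_hugepages=1024   # pages requested by the test
surp=0 resv=0       # surplus and reserved pages (both zero in this run)

nodes_test=([0]=1024 [1]=0)   # pages observed per NUMA node

# Global check: the node totals must add up to requested + surplus + reserved.
total=0
for node in "${!nodes_test[@]}"; do
    (( total += nodes_test[node] ))
done
(( total == nr_hugepages + surp + resv )) || { echo "FAIL"; exit 1; }

# Per-node report, in the same shape as the log's "node0=1024 expecting 1024".
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[$node]} expecting ${nodes_test[$node]}"
done
```

The log's check `(( 1024 == nr_hugepages + surp + resv ))` is the same comparison with the meminfo value already substituted in by the shell.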
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.760
18:36:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:51.697
00:04:51.697 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:51.697 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:51.697 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:51.697 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:51.697 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:51.697 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:51.697 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:51.697 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:51.697 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:51.697 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:51.697 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:51.697 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:51.697 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:51.697 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:51.697 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:51.697 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:51.697 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43351728 kB' 'MemAvailable: 46841012 kB' 'Buffers: 2704 kB' 'Cached: 12776040 kB' 'SwapCached: 0 kB' 'Active: 9756540 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364660 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473608 kB' 'Mapped: 186304 kB' 'Shmem: 8894480 kB' 'KReclaimable: 197716 kB' 'Slab: 559064 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361348 kB' 'KernelStack: 12640 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10476640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.964
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.964
00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.965 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.965 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43352132 kB' 'MemAvailable: 46841416 kB' 'Buffers: 2704 kB' 'Cached: 12776044 kB' 'SwapCached: 0 kB' 'Active: 9756412 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364532 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473408 kB' 'Mapped: 186292 kB' 'Shmem: 8894484 kB' 'KReclaimable: 197716 kB' 'Slab: 559064 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361348 kB' 'KernelStack: 12656 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10476660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.966 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.967 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541692 kB' 'MemFree: 43352520 kB' 'MemAvailable: 46841804 kB' 'Buffers: 2704 kB' 'Cached: 12776080 kB' 'SwapCached: 0 kB' 'Active: 9755992 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364112 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472908 kB' 'Mapped: 186180 kB' 'Shmem: 8894520 kB' 'KReclaimable: 197716 kB' 'Slab: 559004 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361288 kB' 'KernelStack: 12656 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10476680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.968 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.969 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.970 nr_hugepages=1024 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.970 resv_hugepages=0 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.970 surplus_hugepages=0 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.970 anon_hugepages=0 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.970 
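The long trace above is `get_meminfo` walking every `key: value` pair in `/proc/meminfo` until it hits the requested key (here `HugePages_Surp`, then `HugePages_Rsvd`) and echoing its value. A minimal, hypothetical condensation of that scan — not the actual `setup/common.sh` implementation, which uses `mapfile` and per-node filtering — looks like this, demonstrated against a snapshot of the values seen in this log:

```shell
# Hypothetical sketch of the traced scan: split each line on ': ' and
# print the value for the requested key, as the @31/@32 trace lines do.
get_meminfo_value() {
  local get=$1 file=$2 var val _
  while IFS=': ' read -r var val _; do
    # Matching key found: emit its value and stop (the "echo 0" step).
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$file"
  return 1
}

# Deterministic demo using the hugepage counters reported in this run.
snapshot=$(mktemp)
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Free: 1024' \
  'HugePages_Rsvd: 0' 'HugePages_Surp: 0' > "$snapshot"
get_meminfo_value HugePages_Rsvd "$snapshot"   # prints 0
rm -f "$snapshot"
```

On a live system the second argument would be `/proc/meminfo` (or a per-node `meminfo` under `/sys/devices/system/node/`); the here-written snapshot just keeps the example self-contained.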
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43353200 kB' 'MemAvailable: 46842484 kB' 'Buffers: 2704 kB' 'Cached: 12776084 kB' 'SwapCached: 0 kB' 'Active: 9756320 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364440 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473256 kB' 'Mapped: 186180 kB' 'Shmem: 8894524 kB' 
'KReclaimable: 197716 kB' 'Slab: 559004 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361288 kB' 'KernelStack: 12656 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10476704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.970 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 
18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.971 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.972 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20640380 kB' 'MemUsed: 12236560 kB' 'SwapCached: 0 kB' 'Active: 6760872 kB' 'Inactive: 3264848 kB' 'Active(anon): 6572300 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9750220 kB' 'Mapped: 104668 kB' 'AnonPages: 278720 kB' 'Shmem: 6296800 kB' 'KernelStack: 7880 kB' 'PageTables: 5100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124024 kB' 'Slab: 320832 kB' 'SReclaimable: 124024 kB' 'SUnreclaim: 196808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.972 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:51.973 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.973 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22713292 kB' 'MemUsed: 4951460 kB' 'SwapCached: 0 kB' 'Active: 2995424 kB' 'Inactive: 227536 kB' 'Active(anon): 2792116 kB' 'Inactive(anon): 0 kB' 'Active(file): 203308 kB' 'Inactive(file): 227536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3028584 kB' 'Mapped: 81512 kB' 'AnonPages: 194492 kB' 'Shmem: 2597740 kB' 'KernelStack: 4760 kB' 'PageTables: 2780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73692 kB' 'Slab: 238172 kB' 'SReclaimable: 73692 kB' 'SUnreclaim: 164480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.974 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:51.975 node0=512 expecting 512 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:51.975 node1=512 expecting 512 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:51.975 00:04:51.975 real 0m1.353s 00:04:51.975 user 0m0.533s 00:04:51.975 sys 0m0.757s 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.975 18:36:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:51.975 ************************************ 00:04:51.975 END TEST per_node_1G_alloc 00:04:51.975 ************************************ 00:04:52.235 18:36:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:52.235 18:36:40 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:52.235 18:36:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.235 18:36:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.235 18:36:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.235 
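The long `IFS=': ' read -r var val _` / `continue` trace above is `setup/common.sh`'s `get_meminfo` scanning a per-node meminfo file (`/sys/devices/system/node/node1/meminfo`) line by line until it hits the requested key (here `HugePages_Surp`) and echoes its value. A minimal standalone sketch of that parsing pattern — an illustrative reimplementation, not the actual SPDK `setup/common.sh` code, with `get_field` as a hypothetical name:

```shell
#!/bin/sh
# Sketch of the pattern the traced get_meminfo uses: split each
# "Key: value kB" line on ': ' and print the value for one key.
get_field() {
    key=$1
    file=${2:-/proc/meminfo}
    # IFS=': ' splits on the colon and surrounding spaces, so for
    # "HugePages_Surp: 0" var=HugePages_Surp, val=0; the trailing
    # "kB" unit (when present) lands in the discarded third field.
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$key" ]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# Demo against a fabricated meminfo snippet (values are examples only):
tmp=$(mktemp)
printf '%s\n' 'MemTotal: 27664752 kB' \
              'HugePages_Total: 512' \
              'HugePages_Surp: 0' > "$tmp"
get_field HugePages_Surp "$tmp"
rm -f "$tmp"
```

The per-node results that follow (`node0=512 expecting 512`, `node1=512 expecting 512`) come from summing this parsed `HugePages_Surp` into `nodes_test[node]` for each NUMA node and comparing against the expected even split.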
************************************ 00:04:52.235 START TEST even_2G_alloc 00:04:52.235 ************************************ 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 
-- # : 512 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.235 18:36:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.173 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.173 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:53.173 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.173 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:53.173 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.173 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.173 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.173 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:53.173 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:53.173 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:53.173 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:53.173 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:04:53.173 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:53.173 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:53.173 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:53.173 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:53.173 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.438 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.438 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43369980 kB' 'MemAvailable: 46859264 kB' 'Buffers: 2704 kB' 'Cached: 12776172 kB' 'SwapCached: 0 kB' 'Active: 9757072 kB' 'Inactive: 3492384 kB' 'Active(anon): 9365192 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473844 kB' 'Mapped: 186328 kB' 'Shmem: 8894612 kB' 'KReclaimable: 197716 kB' 'Slab: 558972 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361256 kB' 'KernelStack: 12704 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10477060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 
18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.439 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43369488 kB' 'MemAvailable: 46858772 kB' 'Buffers: 2704 kB' 'Cached: 12776176 kB' 'SwapCached: 0 kB' 'Active: 9756744 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364864 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473484 kB' 'Mapped: 186264 kB' 'Shmem: 8894616 kB' 'KReclaimable: 197716 kB' 'Slab: 558892 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361176 kB' 'KernelStack: 12688 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10477080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.440 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 
18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.441 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.442 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43369812 kB' 'MemAvailable: 46859096 kB' 'Buffers: 2704 kB' 'Cached: 12776192 kB' 'SwapCached: 0 kB' 'Active: 9756584 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364704 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473284 kB' 'Mapped: 186188 kB' 'Shmem: 8894632 kB' 'KReclaimable: 197716 kB' 'Slab: 558908 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361192 kB' 'KernelStack: 12688 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10477100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.442 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 
18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.443 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.444 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:53.444 nr_hugepages=1024 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.444 resv_hugepages=0 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.444 surplus_hugepages=0 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.444 anon_hugepages=0 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.444 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43369812 kB' 'MemAvailable: 46859096 kB' 'Buffers: 2704 kB' 'Cached: 12776212 kB' 'SwapCached: 0 kB' 'Active: 9756568 kB' 'Inactive: 3492384 kB' 'Active(anon): 9364688 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473248 kB' 'Mapped: 186188 kB' 'Shmem: 8894652 kB' 'KReclaimable: 197716 kB' 'Slab: 558908 kB' 'SReclaimable: 197716 kB' 'SUnreclaim: 361192 kB' 'KernelStack: 12672 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10477120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.444 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@18 -- # local node=0 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:53.445 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20650812 kB' 'MemUsed: 12226128 kB' 'SwapCached: 0 kB' 'Active: 6761412 kB' 'Inactive: 3264848 kB' 'Active(anon): 6572840 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9750352 kB' 'Mapped: 104676 kB' 'AnonPages: 279144 kB' 'Shmem: 6296932 kB' 'KernelStack: 7976 kB' 'PageTables: 5192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124024 kB' 'Slab: 320720 kB' 'SReclaimable: 124024 kB' 'SUnreclaim: 196696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.446 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/match/continue xtrace repeated for each remaining node0 meminfo field (AnonPages through HugePages_Free); none matched HugePages_Surp ...]
00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@20 -- # local mem_f mem 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22718244 kB' 'MemUsed: 4946508 kB' 'SwapCached: 0 kB' 'Active: 2995208 kB' 'Inactive: 227536 kB' 'Active(anon): 2791900 kB' 'Inactive(anon): 0 kB' 'Active(file): 203308 kB' 'Inactive(file): 227536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3028588 kB' 'Mapped: 81512 kB' 'AnonPages: 194208 kB' 'Shmem: 2597744 kB' 'KernelStack: 4744 kB' 'PageTables: 2736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73692 kB' 'Slab: 238188 kB' 'SReclaimable: 73692 kB' 'SUnreclaim: 164496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.447 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/match/continue xtrace repeated for each remaining node1 meminfo field (MemUsed through Unaccepted); none matched HugePages_Surp ...]
00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.448 18:36:41
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:53.448 node0=512 expecting 512 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.448 18:36:41 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:53.448 node1=512 expecting 512 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:53.448 00:04:53.448 real 0m1.435s 00:04:53.448 user 0m0.615s 00:04:53.448 sys 0m0.782s 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.448 18:36:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:53.448 ************************************ 00:04:53.448 END TEST even_2G_alloc 00:04:53.448 ************************************ 00:04:53.706 18:36:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:53.706 18:36:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:53.706 18:36:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.706 18:36:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.706 18:36:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:53.706 ************************************ 00:04:53.706 START TEST odd_alloc 00:04:53.707 ************************************ 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:53.707 18:36:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.707 18:36:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.642 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.643 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:54.643 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.643 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.643 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.643 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.643 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.643 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.643 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.643 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.643 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.643 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.643 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.643 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.643 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.643 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.643 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43372304 kB' 'MemAvailable: 46861576 kB' 'Buffers: 2704 kB' 'Cached: 12776308 kB' 'SwapCached: 0 kB' 'Active: 9753780 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361900 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470360 kB' 
'Mapped: 185208 kB' 'Shmem: 8894748 kB' 'KReclaimable: 197692 kB' 'Slab: 558852 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361160 kB' 'KernelStack: 12688 kB' 'PageTables: 7680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10461856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.906 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.907 
18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43371536 kB' 'MemAvailable: 46860808 kB' 'Buffers: 2704 kB' 'Cached: 12776312 kB' 'SwapCached: 0 kB' 'Active: 9753656 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361776 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470284 kB' 'Mapped: 185252 kB' 'Shmem: 8894752 kB' 'KReclaimable: 197692 kB' 'Slab: 558824 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361132 kB' 'KernelStack: 12704 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10461872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.907 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.908 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43371876 kB' 'MemAvailable: 46861148 kB' 'Buffers: 2704 kB' 'Cached: 12776328 kB' 'SwapCached: 0 kB' 'Active: 9753204 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361324 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469788 kB' 'Mapped: 185252 kB' 'Shmem: 8894768 kB' 'KReclaimable: 197692 kB' 'Slab: 558872 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361180 kB' 'KernelStack: 12656 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10461896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 
18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 
18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.909 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.909 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.910 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:54.911 nr_hugepages=1025 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.911 resv_hugepages=0 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.911 surplus_hugepages=0 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.911 anon_hugepages=0 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == 
nr_hugepages )) 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43373800 kB' 'MemAvailable: 46863072 kB' 'Buffers: 2704 kB' 'Cached: 12776344 kB' 'SwapCached: 0 kB' 'Active: 9753188 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361308 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469756 kB' 'Mapped: 185252 kB' 'Shmem: 8894784 kB' 'KReclaimable: 197692 kB' 'Slab: 558872 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361180 kB' 'KernelStack: 12656 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 10461916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 
18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 
18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.911 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 
00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:54.912 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20645708 kB' 'MemUsed: 12231232 kB' 'SwapCached: 0 kB' 'Active: 6759892 kB' 'Inactive: 3264848 kB' 'Active(anon): 6571320 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9750472 kB' 'Mapped: 103960 kB' 'AnonPages: 277448 kB' 'Shmem: 6297052 kB' 'KernelStack: 7992 kB' 'PageTables: 5192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124024 kB' 'Slab: 320632 kB' 'SReclaimable: 124024 kB' 'SUnreclaim: 196608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.913 18:36:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.913 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 
18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22728148 kB' 'MemUsed: 4936604 kB' 'SwapCached: 0 kB' 'Active: 2993600 kB' 'Inactive: 227536 kB' 'Active(anon): 2790292 kB' 'Inactive(anon): 0 kB' 
'Active(file): 203308 kB' 'Inactive(file): 227536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3028600 kB' 'Mapped: 81292 kB' 'AnonPages: 192576 kB' 'Shmem: 2597756 kB' 'KernelStack: 4680 kB' 'PageTables: 2412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73668 kB' 'Slab: 238240 kB' 'SReclaimable: 73668 kB' 'SUnreclaim: 164572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.914 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 
18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 
18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:54.915 node0=512 expecting 513 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.915 
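The xtrace above repeatedly runs the `get_meminfo` helper from `setup/common.sh`: it picks the per-node meminfo file when a node id is given, strips the `Node <id> ` prefix, then scans field-by-field for the requested key (here `HugePages_Surp`). Below is a minimal sketch reconstructed from that trace; the actual upstream `setup/common.sh` may differ in details, and the function and variable names follow the trace rather than any published API.

```shell
#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern used to strip "Node N "

# get_meminfo FIELD [NODE] -> echoes the numeric value of FIELD, or 0.
# Sketch based on the setup/common.sh trace above; not the verbatim source.
get_meminfo() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo

	# Prefer the per-NUMA-node view when a node id is supplied.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	local -a mem
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <id> "; drop that prefix
	# so both file formats parse identically.
	mem=("${mem[@]#Node +([0-9]) }")

	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done
	echo 0
}
```

Called as `get_meminfo HugePages_Surp 1`, this reads `/sys/devices/system/node/node1/meminfo` and echoes the surplus-page count, which is exactly the `echo 0` / `return 0` pair visible at `setup/common.sh@33` in the trace.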
18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:54.915 node1=513 expecting 512 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:54.915 00:04:54.915 real 0m1.346s 00:04:54.915 user 0m0.596s 00:04:54.915 sys 0m0.710s 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.915 18:36:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:54.915 ************************************ 00:04:54.915 END TEST odd_alloc 00:04:54.915 ************************************ 00:04:54.915 18:36:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:54.915 18:36:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:54.915 18:36:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.915 18:36:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.915 18:36:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:54.915 ************************************ 00:04:54.915 START TEST custom_alloc 00:04:54.915 ************************************ 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local 
nodes_hp 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:54.915 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:54.916 18:36:43 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.916 18:36:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.297 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.297 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:56.297 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.297 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.297 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.297 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.297 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.297 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.297 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.297 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 
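The `custom_alloc` trace above (hugepages.sh@181–@187) builds the `HUGENODE` specification by walking the `nodes_hp` array, appending one `nodes_hp[N]=count` entry per node, and summing the counts into `_nr_hugepages`. A minimal sketch of that loop, reconstructed from the trace (the real `setup/hugepages.sh` may differ; the `HUGENODE_SPEC` name here is illustrative only):

```shell
#!/usr/bin/env bash
# Per-node hugepage requests, as set by the custom_alloc test in the trace:
# 512 x 2 MB pages on node 0, 1024 on node 1.
declare -a nodes_hp=([0]=512 [1]=1024)

declare -a HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
	# One spec fragment per node, e.g. "nodes_hp[0]=512".
	HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
	(( _nr_hugepages += nodes_hp[node] ))
done

# Join the fragments with commas, matching the HUGENODE value in the log.
old_IFS=$IFS
IFS=,
HUGENODE_SPEC="${HUGENODE[*]}"
IFS=$old_IFS

echo "$HUGENODE_SPEC"   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$_nr_hugepages"   # 1536
```

This reproduces the `HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'` assignment and the `nr_hugepages=1536` total that `setup.sh` is then invoked with in the log.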
00:04:56.297 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.297 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.297 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.297 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.297 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.297 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.297 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.297 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:56.297 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:56.297 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:56.297 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.297 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.297 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:56.297 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:56.297 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42339940 kB' 'MemAvailable: 45829212 kB' 'Buffers: 2704 kB' 'Cached: 12776436 kB' 'SwapCached: 0 kB' 'Active: 9754000 kB' 'Inactive: 3492384 kB' 'Active(anon): 9362120 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470444 kB' 'Mapped: 185264 kB' 'Shmem: 8894876 kB' 'KReclaimable: 197692 kB' 'Slab: 559212 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361520 kB' 'KernelStack: 12704 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10461984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 
kB' 'DirectMap1G: 53477376 kB' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
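The trace records above show setup/common.sh's `get_meminfo` walking every `/proc/meminfo` key (`MemTotal`, `MemFree`, …) and skipping each one that is not the requested field (`AnonHugePages` here, `HugePages_Surp` later). A minimal sketch of that parsing pattern, assuming a plain key lookup rather than the mem-array bookkeeping the real script does:

```shell
#!/usr/bin/env bash
# Simplified reconstruction of the get_meminfo pattern seen in the trace.
# The function name and mem_f variable mirror the logged setup/common.sh;
# the body is a hedged sketch, not the exact SPDK implementation.
get_meminfo() {
  local get=$1 mem_f=/proc/meminfo var val _
  # Same field splitting as the trace: IFS=': ' read -r var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      # Matched the requested key; print its numeric value (kB units dropped)
      echo "$val"
      return 0
    fi
    # Non-matching keys fall through, like the repeated 'continue' lines above
  done < "$mem_f"
  return 1
}

# Example: query surplus huge pages, as the log does for HugePages_Surp
get_meminfo HugePages_Surp
```

On the logged system this lookup resolves to `0`, matching the `HugePages_Surp: 0` field printed in the captured meminfo dump.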
00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.298 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 
18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 
18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42339940 kB' 'MemAvailable: 45829212 kB' 'Buffers: 2704 kB' 'Cached: 12776436 kB' 'SwapCached: 0 kB' 'Active: 9753876 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361996 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470300 kB' 'Mapped: 185344 kB' 'Shmem: 8894876 kB' 'KReclaimable: 197692 kB' 'Slab: 559252 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361560 kB' 'KernelStack: 12704 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10462000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.299 
18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.299 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.300 18:36:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
[... read/compare/continue trace repeated for each remaining /proc/meminfo key ...]
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf
'%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42340072 kB' 'MemAvailable: 45829344 kB' 'Buffers: 2704 kB' 'Cached: 12776456 kB' 'SwapCached: 0 kB' 'Active: 9753480 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361600 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469864 kB' 'Mapped: 185268 kB' 'Shmem: 8894896 kB' 'KReclaimable: 197692 kB' 'Slab: 559252 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361560 kB' 'KernelStack: 12720 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10462024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB'
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
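The hugepage fields in the snapshot above are internally consistent: Hugetlb equals HugePages_Total multiplied by Hugepagesize. A quick sanity check of that arithmetic, with the values copied from the dump:

```shell
# Hugepage figures taken from the /proc/meminfo snapshot above.
hp_total=1536        # HugePages_Total
hp_size_kb=2048      # Hugepagesize, in kB
hugetlb_kb=$((hp_total * hp_size_kb))
echo "Hugetlb: ${hugetlb_kb} kB"   # prints "Hugetlb: 3145728 kB", matching the Hugetlb field
```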
00:04:56.301 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... read/compare/continue trace repeated for each remaining /proc/meminfo key ...]
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:56.303 nr_hugepages=1536
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.303 resv_hugepages=0
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.303 surplus_hugepages=0
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.303 anon_hugepages=0
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.303 18:36:44
setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 42340472 kB' 'MemAvailable: 45829744 kB' 'Buffers: 2704 kB' 'Cached: 12776476 kB' 'SwapCached: 0 kB' 'Active: 9753508 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361628 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469876 kB' 'Mapped: 185268 kB' 'Shmem: 8894916 kB' 'KReclaimable: 197692 kB' 'Slab: 559252 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361560 kB' 'KernelStack: 12720 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 10462044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB'
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... read/compare/continue trace repeated for each remaining /proc/meminfo key ...]
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.303 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@112 -- # get_nodes 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.304 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.305 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20647200 kB' 'MemUsed: 12229740 kB' 'SwapCached: 0 kB' 'Active: 6760268 kB' 'Inactive: 3264848 kB' 'Active(anon): 6571696 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9750572 kB' 'Mapped: 103976 kB' 'AnonPages: 277676 kB' 'Shmem: 6297152 kB' 'KernelStack: 8024 kB' 'PageTables: 5192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124024 kB' 'Slab: 320984 kB' 'SReclaimable: 124024 kB' 'SUnreclaim: 196960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.305 18:36:44
[... setup/common.sh@31-32 xtrace trimmed: identical per-field skip iterations (MemTotal through HugePages_Free) until HugePages_Surp is reached ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 21693272 kB' 'MemUsed: 5971480 kB' 'SwapCached: 0 kB' 'Active: 2993408 kB' 'Inactive: 227536 kB' 'Active(anon): 2790100 kB' 'Inactive(anon): 0 kB' 'Active(file): 203308 kB' 'Inactive(file): 227536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3028612 kB' 'Mapped: 81292 kB' 'AnonPages: 192356 kB' 'Shmem: 2597768 kB' 'KernelStack: 4696 kB' 'PageTables: 2364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 73668 kB' 'Slab: 238268 kB' 'SReclaimable: 73668 kB' 'SUnreclaim: 164600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.306 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 
0 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:56.307 node0=512 expecting 512 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:56.307 node1=1024 expecting 1024 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:56.307 00:04:56.307 real 0m1.407s 00:04:56.307 user 0m0.606s 00:04:56.307 sys 0m0.758s 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.307 18:36:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:56.307 ************************************ 00:04:56.307 END TEST custom_alloc 00:04:56.307 ************************************ 00:04:56.307 18:36:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:56.307 18:36:44 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:56.307 18:36:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.307 18:36:44 
setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.307 18:36:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:56.565 ************************************ 00:04:56.565 START TEST no_shrink_alloc 00:04:56.565 ************************************ 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.565 18:36:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.497 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:57.497 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:57.497 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:57.497 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:57.497 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:57.497 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:57.497 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:57.497 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:57.497 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:57.497 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:57.497 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:57.497 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:57.497 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:57.497 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:57.497 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:57.497 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:57.497 0000:80:04.0 
(8086 0e20): Already using the vfio-pci driver 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.757 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.757 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43269884 kB' 'MemAvailable: 46759156 kB' 'Buffers: 2704 kB' 'Cached: 12776560 kB' 'SwapCached: 0 kB' 'Active: 9753636 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361756 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469992 kB' 'Mapped: 185420 kB' 'Shmem: 8895000 kB' 'KReclaimable: 197692 kB' 'Slab: 559032 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361340 kB' 'KernelStack: 12704 kB' 'PageTables: 7520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10462448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 
18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.758 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 
18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43273220 kB' 'MemAvailable: 46762492 kB' 'Buffers: 2704 kB' 'Cached: 12776560 kB' 'SwapCached: 0 kB' 'Active: 9754068 kB' 'Inactive: 3492384 kB' 'Active(anon): 9362188 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470412 kB' 'Mapped: 185356 kB' 'Shmem: 8895000 kB' 'KReclaimable: 197692 kB' 'Slab: 559040 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361348 kB' 'KernelStack: 12752 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10462464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 
18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.759 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 
18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.760 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43273552 kB' 'MemAvailable: 46762824 kB' 'Buffers: 2704 kB' 'Cached: 12776584 kB' 'SwapCached: 0 kB' 'Active: 9753796 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361916 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470196 kB' 'Mapped: 185280 kB' 'Shmem: 8895024 kB' 'KReclaimable: 197692 kB' 'Slab: 559052 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361360 kB' 'KernelStack: 12816 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10462488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 
'DirectMap1G: 53477376 kB' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.761 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 
18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.762 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:57.763 nr_hugepages=1024 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.763 resv_hugepages=0 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
00:04:57.763 surplus_hugepages=0 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.763 anon_hugepages=0 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43275060 kB' 'MemAvailable: 46764332 kB' 'Buffers: 2704 kB' 'Cached: 12776600 kB' 'SwapCached: 0 kB' 'Active: 9753756 kB' 'Inactive: 3492384 kB' 'Active(anon): 9361876 kB' 'Inactive(anon): 
0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470112 kB' 'Mapped: 185280 kB' 'Shmem: 8895040 kB' 'KReclaimable: 197692 kB' 'Slab: 559052 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361360 kB' 'KernelStack: 12752 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10462508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.763 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 
18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 
18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.764 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 
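The trace up to this point shows the `get_meminfo` pattern from `setup/common.sh`: with `IFS=': '`, `read -r var val _` splits each meminfo line into a key and a value, every non-matching key falls through to `continue` (hence the long run of repeated entries above), and when the requested key (`HugePages_Total`) matches, its value (1024) is echoed back to the caller. A minimal standalone sketch of that pattern, with an illustrative function name and sample file (not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan seen in the trace: walk a
# meminfo-style file, skip non-matching keys, print the value
# of the one requested key.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped, exactly like the
        # repeated "continue" entries in the log above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Hypothetical sample input for illustration:
printf 'MemTotal: 100 kB\nHugePages_Total: 1024\n' > /tmp/meminfo.sample
get_meminfo_sketch HugePages_Total /tmp/meminfo.sample   # prints 1024
```

The `_` in `read -r var val _` soaks up the trailing `kB` unit so `val` stays a bare number the caller can compare arithmetically, as in the `(( 1024 == nr_hugepages + surp + resv ))` check above.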
00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19598720 kB' 'MemUsed: 13278220 kB' 'SwapCached: 0 kB' 'Active: 6762996 kB' 'Inactive: 3264848 kB' 'Active(anon): 6574424 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9750640 kB' 'Mapped: 104424 kB' 'AnonPages: 280816 kB' 'Shmem: 6297220 kB' 'KernelStack: 8056 kB' 'PageTables: 5188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124024 kB' 'Slab: 320852 kB' 'SReclaimable: 124024 kB' 'SUnreclaim: 196828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
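For the per-node pass, the helper switches `mem_f` from `/proc/meminfo` to `/sys/devices/system/node/node0/meminfo`, whose lines carry a `Node N ` prefix; the `mapfile -t mem` and `mem=("${mem[@]#Node +([0-9]) }")` entries just above strip that prefix so the same key scan can run unchanged. A hedged sketch of that prefix strip, with made-up sample data standing in for the sysfs file:

```shell
#!/usr/bin/env bash
shopt -s extglob  # required for the +([0-9]) pattern used below

# Per-node meminfo lines look like "Node 0 HugePages_Surp: 0";
# remove the "Node N " prefix from every array element so the
# generic "key: value" scan can parse them.
mapfile -t mem < <(printf 'Node 0 HugePages_Total: 1024\nNode 0 HugePages_Surp: 0\n')
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
```

Note `extglob` must be enabled for `+([0-9])` to act as a pattern in the `${var#pattern}` expansion; in the real test harness the sourced setup scripts take care of that.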
00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.765 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:57.766 node0=1024 expecting 1024 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.766 18:36:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.145 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.145 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.145 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.145 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.145 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.145 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.146 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.146 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.146 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.146 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.146 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.146 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.146 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.146 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.146 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.146 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.146 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.146 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43285576 kB' 'MemAvailable: 46774848 kB' 'Buffers: 2704 kB' 'Cached: 12776672 kB' 'SwapCached: 0 kB' 'Active: 9754068 kB' 'Inactive: 3492384 kB' 'Active(anon): 9362188 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470340 kB' 'Mapped: 185388 kB' 'Shmem: 8895112 kB' 'KReclaimable: 197692 kB' 'Slab: 558976 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361284 kB' 'KernelStack: 12736 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10462320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.146 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.147 
18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43285948 kB' 'MemAvailable: 46775220 kB' 'Buffers: 2704 kB' 'Cached: 12776680 kB' 'SwapCached: 0 kB' 'Active: 9754040 kB' 'Inactive: 3492384 kB' 'Active(anon): 9362160 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470344 kB' 'Mapped: 
185288 kB' 'Shmem: 8895120 kB' 'KReclaimable: 197692 kB' 'Slab: 558940 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361248 kB' 'KernelStack: 12752 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10462708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.147 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.148 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.148 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.148 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.148 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.148 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.148 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.148 18:36:47 
00:04:59.148 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: get_meminfo compares each remaining /proc/meminfo key -- Active(anon) through HugePages_Rsvd -- against HugePages_Surp and continues past every non-match]
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
read -r var val _
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' \
    'MemTotal: 60541692 kB' 'MemFree: 43285948 kB' 'MemAvailable: 46775220 kB' 'Buffers: 2704 kB' \
    'Cached: 12776696 kB' 'SwapCached: 0 kB' 'Active: 9754048 kB' 'Inactive: 3492384 kB' \
    'Active(anon): 9362168 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' \
    'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' \
    'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' \
    'AnonPages: 470324 kB' 'Mapped: 185288 kB' 'Shmem: 8895136 kB' 'KReclaimable: 197692 kB' \
    'Slab: 558976 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361284 kB' 'KernelStack: 12752 kB' \
    'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' \
    'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10462728 kB' 'VmallocTotal: 34359738367 kB' \
    'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 kB' \
    'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' \
    'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' \
    'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' \
    'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' \
    'DirectMap1G: 53477376 kB'
00:04:59.149 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: the HugePages_Rsvd lookup compares each key -- MemTotal through FileHugePages -- against HugePages_Rsvd and continues past every non-match] 00:04:59.151 18:36:47
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 
18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.151 nr_hugepages=1024 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.151 resv_hugepages=0 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.151 surplus_hugepages=0 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.151 anon_hugepages=0 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.151 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43285892 kB' 'MemAvailable: 46775164 kB' 'Buffers: 2704 kB' 'Cached: 12776720 kB' 'SwapCached: 0 kB' 'Active: 9754068 kB' 'Inactive: 3492384 kB' 'Active(anon): 9362188 kB' 'Inactive(anon): 0 kB' 'Active(file): 391880 kB' 'Inactive(file): 3492384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470320 kB' 'Mapped: 185288 kB' 'Shmem: 8895160 kB' 'KReclaimable: 197692 kB' 'Slab: 558976 kB' 'SReclaimable: 197692 kB' 'SUnreclaim: 361284 kB' 'KernelStack: 12752 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 10462752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32640 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1652316 kB' 'DirectMap2M: 13996032 kB' 'DirectMap1G: 53477376 kB' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.151 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.152 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19590620 kB' 'MemUsed: 13286320 kB' 'SwapCached: 0 kB' 'Active: 6760612 kB' 'Inactive: 3264848 kB' 'Active(anon): 6572040 kB' 'Inactive(anon): 0 kB' 'Active(file): 188572 kB' 'Inactive(file): 3264848 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9750712 kB' 'Mapped: 103996 kB' 'AnonPages: 277980 kB' 'Shmem: 6297292 kB' 'KernelStack: 8040 kB' 'PageTables: 5144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124024 kB' 'Slab: 320824 kB' 'SReclaimable: 124024 kB' 'SUnreclaim: 196800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.153 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.153 
18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 
18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:59.154 node0=1024 expecting 1024 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:59.154 00:04:59.154 real 0m2.805s 00:04:59.154 user 0m1.179s 00:04:59.154 sys 0m1.545s 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.154 18:36:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:59.154 ************************************ 00:04:59.154 END TEST no_shrink_alloc 00:04:59.154 ************************************ 00:04:59.154 18:36:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:59.154 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:59.154 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:59.154 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:59.154 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.155 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.155 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.155 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.412 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:59.412 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.412 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.412 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.412 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.412 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:59.412 18:36:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:59.412 00:04:59.412 real 0m11.140s 00:04:59.412 user 0m4.322s 00:04:59.412 sys 0m5.711s 00:04:59.412 18:36:47 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.412 18:36:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.412 ************************************ 00:04:59.412 END TEST hugepages 00:04:59.412 ************************************ 00:04:59.412 18:36:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:59.412 18:36:47 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:59.412 18:36:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.412 18:36:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.412 18:36:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.412 ************************************ 00:04:59.412 START TEST driver 00:04:59.412 ************************************ 00:04:59.412 18:36:47 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:59.412 * Looking for test storage... 00:04:59.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:59.412 18:36:47 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:59.412 18:36:47 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.412 18:36:47 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.942 18:36:49 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:01.942 18:36:49 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.942 18:36:49 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.942 18:36:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:01.942 ************************************ 00:05:01.942 START TEST guess_driver 00:05:01.942 ************************************ 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:01.942 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:01.943 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:01.943 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:01.943 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:01.943 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:01.943 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:01.943 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:01.943 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:01.943 Looking for driver=vfio-pci 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.943 18:36:49 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:02.878 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.137 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.137 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:03.137 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.075 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.075 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.075 18:36:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.075 18:36:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:04.075 18:36:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:04.075 18:36:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
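The guess_driver trace above picks vfio-pci by reading `/sys/module/vfio/parameters/enable_unsafe_noiommu_mode`, counting entries under `/sys/kernel/iommu_groups`, and resolving the module chain with `modprobe --show-depends vfio_pci`; when nothing qualifies the script compares against the literal string "No valid driver found". A minimal sketch of that decision, with the sysfs and modprobe results passed in as parameters so it runs without real hardware (the function and parameter names here are illustrative, not SPDK's):

```shell
# Hedged reconstruction of the driver-selection logic traced above.
# pick_driver: prefer vfio-pci when IOMMU groups exist (or unsafe
# no-IOMMU mode is enabled) and the vfio_pci module chain resolves.
pick_driver() {
    local n_iommu_groups=$1   # e.g. 141 in the trace above
    local unsafe_vfio=$2      # enable_unsafe_noiommu_mode contents (Y/N)
    local vfio_resolvable=$3  # did `modprobe --show-depends vfio_pci` succeed?

    if { [ "$n_iommu_groups" -gt 0 ] || [ "$unsafe_vfio" = Y ]; } &&
        [ "$vfio_resolvable" = yes ]; then
        echo vfio-pci
    else
        echo "No valid driver found"
    fi
}

pick_driver 141 N yes   # the values seen in the trace select vfio-pci
```

With the trace's values (141 IOMMU groups, unsafe mode N, module resolvable) this yields vfio-pci, matching the `driver=vfio-pci` assignment at driver.sh@49 above.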
00:05:04.075 18:36:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.634 00:05:06.634 real 0m4.597s 00:05:06.634 user 0m1.032s 00:05:06.634 sys 0m1.680s 00:05:06.634 18:36:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.634 18:36:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:06.634 ************************************ 00:05:06.634 END TEST guess_driver 00:05:06.634 ************************************ 00:05:06.634 18:36:54 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:06.634 00:05:06.634 real 0m7.100s 00:05:06.634 user 0m1.606s 00:05:06.634 sys 0m2.633s 00:05:06.634 18:36:54 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.634 18:36:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:06.634 ************************************ 00:05:06.634 END TEST driver 00:05:06.634 ************************************ 00:05:06.634 18:36:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:06.634 18:36:54 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:06.634 18:36:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.634 18:36:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.634 18:36:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:06.634 ************************************ 00:05:06.634 START TEST devices 00:05:06.634 ************************************ 00:05:06.634 18:36:54 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:06.634 * Looking for test storage... 
00:05:06.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:06.634 18:36:54 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:06.634 18:36:54 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:06.634 18:36:54 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.634 18:36:54 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
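The devices trace that follows qualifies a test disk in three steps: it must not be a zoned block device (`/sys/block/<dev>/queue/zoned` reads `none`), it must carry no partition table (`blkid -s PTTYPE` returns nothing, hence "No valid GPT data, bailing"), and it must be at least `min_disk_size=3221225472` bytes (3 GiB, devices.sh@198). A hedged sketch of those checks, parameterized on a sysfs root and the probed values so it runs without NVMe hardware; the helper names are illustrative, not SPDK's:

```shell
# Hedged sketch of the device-qualification checks traced above.
min_disk_size=3221225472   # 3 GiB, from devices.sh@198 in the trace

# A device is zoned when its queue/zoned attribute exists and reads
# something other than "none" (e.g. "host-managed").
is_block_zoned() {
    local device=$1 sysfs=${2:-/sys}
    [ -e "$sysfs/block/$device/queue/zoned" ] &&
        [ "$(cat "$sysfs/block/$device/queue/zoned")" != none ]
}

# Not zoned, no partition-table type reported, and large enough.
disk_qualifies() {
    local device=$1 pttype=$2 size=$3 sysfs=${4:-/sys}
    ! is_block_zoned "$device" "$sysfs" &&
        [ -z "$pttype" ] &&
        [ "$size" -ge "$min_disk_size" ]
}
```

With the values recorded in the trace (empty PTTYPE, 1000204886016 bytes), nvme0n1 passes and becomes the declared `test_disk`.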
00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:08.011 18:36:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:08.011 18:36:56 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:08.011 No valid GPT data, bailing 00:05:08.011 18:36:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:08.011 18:36:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:08.011 18:36:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:08.011 18:36:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:08.011 18:36:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:08.011 18:36:56 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:08.011 18:36:56 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.011 18:36:56 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.011 18:36:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.011 ************************************ 00:05:08.011 START TEST nvme_mount 00:05:08.011 ************************************ 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:08.011 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:08.012 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:08.012 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:08.012 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:08.012 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.012 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.012 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:08.012 18:36:56 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.012 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:08.012 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:08.012 18:36:56 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:08.947 Creating new GPT entries in memory. 00:05:08.947 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:08.947 other utilities. 00:05:08.947 18:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:08.947 18:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.947 18:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:08.947 18:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:08.947 18:36:57 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:10.324 Creating new GPT entries in memory. 00:05:10.324 The operation has completed successfully. 
00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3454124 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.324 18:36:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:11.259 
18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:11.259 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:11.259 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:11.517 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:11.517 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:11.517 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:11.517 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:11.517 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:11.517 18:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:11.517 18:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.517 18:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:11.517 18:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.775 18:36:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:12.708 18:37:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.708 18:37:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.082 18:37:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.082 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:14.082 00:05:14.082 real 0m6.045s 00:05:14.082 user 0m1.354s 00:05:14.082 sys 0m2.254s 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.082 18:37:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:14.082 ************************************ 00:05:14.082 END TEST nvme_mount 00:05:14.082 ************************************ 00:05:14.082 18:37:02 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:14.082 18:37:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:05:14.082 18:37:02 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.082 18:37:02 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.082 18:37:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:14.082 ************************************ 00:05:14.082 START TEST dm_mount 00:05:14.082 ************************************ 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:05:14.082 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:14.083 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.083 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:14.083 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:14.083 18:37:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:15.016 Creating new GPT entries in memory. 00:05:15.016 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:15.016 other utilities. 00:05:15.016 18:37:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:15.016 18:37:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.016 18:37:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:15.016 18:37:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:15.017 18:37:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:16.386 Creating new GPT entries in memory. 00:05:16.387 The operation has completed successfully. 00:05:16.387 18:37:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:16.387 18:37:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.387 18:37:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:16.387 18:37:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.387 18:37:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:17.323 The operation has completed successfully. 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3456388 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.323 18:37:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:18.258 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:18.516 18:37:06 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.516 18:37:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.451 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:19.710 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:19.710 00:05:19.710 real 0m5.567s 00:05:19.710 user 0m0.909s 00:05:19.710 sys 0m1.499s 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.710 18:37:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:19.710 ************************************ 00:05:19.710 END TEST dm_mount 00:05:19.710 ************************************ 00:05:19.710 18:37:07 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:19.710 18:37:07 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:19.710 18:37:07 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:19.710 18:37:07 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.710 18:37:07 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.710 18:37:07 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:19.710 18:37:07 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:19.710 18:37:07 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:19.970 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:19.970 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:05:19.970 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:19.970 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:19.970 18:37:08 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:19.970 18:37:08 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.970 18:37:08 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:19.970 18:37:08 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:19.970 18:37:08 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:19.970 18:37:08 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:19.970 18:37:08 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:19.970 00:05:19.970 real 0m13.519s 00:05:19.970 user 0m2.931s 00:05:19.970 sys 0m4.743s 00:05:19.970 18:37:08 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.970 18:37:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:19.970 ************************************ 00:05:19.970 END TEST devices 00:05:19.970 ************************************ 00:05:19.970 18:37:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:19.970 00:05:19.970 real 0m42.573s 00:05:19.970 user 0m12.241s 00:05:19.970 sys 0m18.487s 00:05:19.970 18:37:08 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.970 18:37:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:19.970 ************************************ 00:05:19.970 END TEST setup.sh 00:05:19.970 ************************************ 00:05:19.970 18:37:08 -- common/autotest_common.sh@1142 -- # return 0 00:05:19.970 18:37:08 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:21.349 Hugepages 00:05:21.349 node hugesize free / total 
00:05:21.349 node0 1048576kB 0 / 0 00:05:21.349 node0 2048kB 2048 / 2048 00:05:21.349 node1 1048576kB 0 / 0 00:05:21.349 node1 2048kB 0 / 0 00:05:21.349 00:05:21.349 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.349 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:21.349 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:21.349 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:21.349 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:21.349 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:21.349 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:21.349 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:21.349 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:21.349 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:21.349 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:21.349 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:21.349 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:21.349 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:21.349 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:21.349 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:21.349 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:21.349 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:21.349 18:37:09 -- spdk/autotest.sh@130 -- # uname -s 00:05:21.349 18:37:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:21.349 18:37:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:21.349 18:37:09 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:22.282 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:22.282 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:22.539 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:22.539 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:22.539 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:22.539 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:22.539 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:22.539 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:22.539 
0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:22.539 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:22.539 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:22.539 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:22.539 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:22.539 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:22.539 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:22.539 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:23.476 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:23.476 18:37:11 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:24.851 18:37:12 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:24.851 18:37:12 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:24.851 18:37:12 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:24.851 18:37:12 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:24.851 18:37:12 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:24.851 18:37:12 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:24.851 18:37:12 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.851 18:37:12 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:24.851 18:37:12 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:24.851 18:37:12 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:24.851 18:37:12 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:24.851 18:37:12 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:25.785 Waiting for block devices as requested 00:05:25.785 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:25.785 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:25.785 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:26.042 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:26.042 0000:00:04.4 (8086 
0e24): vfio-pci -> ioatdma 00:05:26.042 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:26.042 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:26.300 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:26.300 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:26.300 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:26.300 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:26.558 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:26.558 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:26.558 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:26.558 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:26.823 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:26.823 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:26.823 18:37:15 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:26.823 18:37:15 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:26.823 18:37:15 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:26.823 18:37:15 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:26.823 18:37:15 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:26.823 18:37:15 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:26.823 18:37:15 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:26.823 18:37:15 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:26.823 18:37:15 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:26.823 18:37:15 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:26.823 18:37:15 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:26.823 18:37:15 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:26.823 18:37:15 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:26.823 18:37:15 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:26.823 18:37:15 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:26.823 18:37:15 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:26.823 18:37:15 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:27.117 18:37:15 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:27.117 18:37:15 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:27.117 18:37:15 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:27.117 18:37:15 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:27.117 18:37:15 -- common/autotest_common.sh@1557 -- # continue 00:05:27.117 18:37:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:27.117 18:37:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:27.117 18:37:15 -- common/autotest_common.sh@10 -- # set +x 00:05:27.117 18:37:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:27.117 18:37:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.117 18:37:15 -- common/autotest_common.sh@10 -- # set +x 00:05:27.117 18:37:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:28.055 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:28.055 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:28.055 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:28.055 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:28.055 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:28.055 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:28.055 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:28.055 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:28.055 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:28.055 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:28.055 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:28.055 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:28.314 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:28.314 0000:80:04.2 (8086 
0e22): ioatdma -> vfio-pci 00:05:28.314 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:28.314 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:29.252 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:29.252 18:37:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:29.252 18:37:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.252 18:37:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.252 18:37:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:29.252 18:37:17 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:29.252 18:37:17 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.252 18:37:17 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:29.252 18:37:17 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:29.252 18:37:17 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:29.252 18:37:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:29.252 18:37:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:29.252 18:37:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.252 18:37:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:29.252 18:37:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:29.252 18:37:17 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:29.252 18:37:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:29.252 18:37:17 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:29.252 18:37:17 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:29.252 18:37:17 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:29.252 18:37:17 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:29.252 18:37:17 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:29.252 18:37:17 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:29.252 18:37:17 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:29.253 18:37:17 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3461675 00:05:29.253 18:37:17 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.253 18:37:17 -- common/autotest_common.sh@1598 -- # waitforlisten 3461675 00:05:29.253 18:37:17 -- common/autotest_common.sh@829 -- # '[' -z 3461675 ']' 00:05:29.253 18:37:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.253 18:37:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.253 18:37:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.253 18:37:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.253 18:37:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.513 [2024-07-14 18:37:17.506207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:29.513 [2024-07-14 18:37:17.506294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461675 ] 00:05:29.513 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.513 [2024-07-14 18:37:17.568765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.513 [2024-07-14 18:37:17.655953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.775 18:37:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.775 18:37:17 -- common/autotest_common.sh@862 -- # return 0 00:05:29.775 18:37:17 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:29.775 18:37:17 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:29.775 18:37:17 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:33.069 nvme0n1 00:05:33.069 18:37:20 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:33.069 [2024-07-14 18:37:21.232027] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:33.069 [2024-07-14 18:37:21.232069] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:33.069 request: 00:05:33.069 { 00:05:33.069 "nvme_ctrlr_name": "nvme0", 00:05:33.069 "password": "test", 00:05:33.069 "method": "bdev_nvme_opal_revert", 00:05:33.069 "req_id": 1 00:05:33.069 } 00:05:33.069 Got JSON-RPC error response 00:05:33.069 response: 00:05:33.069 { 00:05:33.069 "code": -32603, 00:05:33.069 "message": "Internal error" 00:05:33.069 } 00:05:33.069 18:37:21 -- common/autotest_common.sh@1604 -- # true 00:05:33.069 18:37:21 -- common/autotest_common.sh@1605 -- # 
(( ++bdf_id )) 00:05:33.069 18:37:21 -- common/autotest_common.sh@1608 -- # killprocess 3461675 00:05:33.069 18:37:21 -- common/autotest_common.sh@948 -- # '[' -z 3461675 ']' 00:05:33.069 18:37:21 -- common/autotest_common.sh@952 -- # kill -0 3461675 00:05:33.069 18:37:21 -- common/autotest_common.sh@953 -- # uname 00:05:33.069 18:37:21 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.069 18:37:21 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3461675 00:05:33.069 18:37:21 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.069 18:37:21 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.069 18:37:21 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3461675' 00:05:33.069 killing process with pid 3461675 00:05:33.069 18:37:21 -- common/autotest_common.sh@967 -- # kill 3461675 00:05:33.069 18:37:21 -- common/autotest_common.sh@972 -- # wait 3461675 00:05:34.973 18:37:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:34.973 18:37:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:34.973 18:37:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.973 18:37:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:34.973 18:37:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:34.973 18:37:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:34.973 18:37:23 -- common/autotest_common.sh@10 -- # set +x 00:05:34.973 18:37:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:34.973 18:37:23 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:34.973 18:37:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.973 18:37:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.973 18:37:23 -- common/autotest_common.sh@10 -- # set +x 00:05:34.973 ************************************ 00:05:34.973 START TEST env 00:05:34.973 ************************************ 00:05:34.973 18:37:23 env -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:34.973 * Looking for test storage... 00:05:34.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:34.973 18:37:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.973 18:37:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.973 18:37:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.973 18:37:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.973 ************************************ 00:05:34.973 START TEST env_memory 00:05:34.973 ************************************ 00:05:34.973 18:37:23 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.973 00:05:34.973 00:05:34.973 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.973 http://cunit.sourceforge.net/ 00:05:34.973 00:05:34.973 00:05:34.973 Suite: memory 00:05:34.973 Test: alloc and free memory map ...[2024-07-14 18:37:23.150291] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:34.973 passed 00:05:34.973 Test: mem map translation ...[2024-07-14 18:37:23.169910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:34.973 [2024-07-14 18:37:23.169931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:34.973 [2024-07-14 18:37:23.169981] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual 
address 281474976710656 00:05:34.973 [2024-07-14 18:37:23.169993] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:35.232 passed 00:05:35.232 Test: mem map registration ...[2024-07-14 18:37:23.210944] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:35.232 [2024-07-14 18:37:23.210968] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:35.232 passed 00:05:35.232 Test: mem map adjacent registrations ...passed 00:05:35.232 00:05:35.232 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.232 suites 1 1 n/a 0 0 00:05:35.232 tests 4 4 4 0 0 00:05:35.232 asserts 152 152 152 0 n/a 00:05:35.232 00:05:35.232 Elapsed time = 0.140 seconds 00:05:35.232 00:05:35.232 real 0m0.148s 00:05:35.232 user 0m0.140s 00:05:35.232 sys 0m0.008s 00:05:35.232 18:37:23 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.232 18:37:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:35.232 ************************************ 00:05:35.232 END TEST env_memory 00:05:35.232 ************************************ 00:05:35.232 18:37:23 env -- common/autotest_common.sh@1142 -- # return 0 00:05:35.232 18:37:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.232 18:37:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.232 18:37:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.232 18:37:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.232 ************************************ 00:05:35.232 START TEST env_vtophys 00:05:35.232 ************************************ 00:05:35.232 18:37:23 
env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.232 EAL: lib.eal log level changed from notice to debug 00:05:35.232 EAL: Detected lcore 0 as core 0 on socket 0 00:05:35.232 EAL: Detected lcore 1 as core 1 on socket 0 00:05:35.232 EAL: Detected lcore 2 as core 2 on socket 0 00:05:35.232 EAL: Detected lcore 3 as core 3 on socket 0 00:05:35.232 EAL: Detected lcore 4 as core 4 on socket 0 00:05:35.232 EAL: Detected lcore 5 as core 5 on socket 0 00:05:35.232 EAL: Detected lcore 6 as core 8 on socket 0 00:05:35.232 EAL: Detected lcore 7 as core 9 on socket 0 00:05:35.232 EAL: Detected lcore 8 as core 10 on socket 0 00:05:35.232 EAL: Detected lcore 9 as core 11 on socket 0 00:05:35.232 EAL: Detected lcore 10 as core 12 on socket 0 00:05:35.232 EAL: Detected lcore 11 as core 13 on socket 0 00:05:35.232 EAL: Detected lcore 12 as core 0 on socket 1 00:05:35.232 EAL: Detected lcore 13 as core 1 on socket 1 00:05:35.232 EAL: Detected lcore 14 as core 2 on socket 1 00:05:35.232 EAL: Detected lcore 15 as core 3 on socket 1 00:05:35.232 EAL: Detected lcore 16 as core 4 on socket 1 00:05:35.232 EAL: Detected lcore 17 as core 5 on socket 1 00:05:35.232 EAL: Detected lcore 18 as core 8 on socket 1 00:05:35.232 EAL: Detected lcore 19 as core 9 on socket 1 00:05:35.232 EAL: Detected lcore 20 as core 10 on socket 1 00:05:35.232 EAL: Detected lcore 21 as core 11 on socket 1 00:05:35.232 EAL: Detected lcore 22 as core 12 on socket 1 00:05:35.232 EAL: Detected lcore 23 as core 13 on socket 1 00:05:35.232 EAL: Detected lcore 24 as core 0 on socket 0 00:05:35.232 EAL: Detected lcore 25 as core 1 on socket 0 00:05:35.232 EAL: Detected lcore 26 as core 2 on socket 0 00:05:35.232 EAL: Detected lcore 27 as core 3 on socket 0 00:05:35.232 EAL: Detected lcore 28 as core 4 on socket 0 00:05:35.232 EAL: Detected lcore 29 as core 5 on socket 0 00:05:35.232 EAL: Detected lcore 30 as core 8 on socket 0 
00:05:35.232 EAL: Detected lcore 31 as core 9 on socket 0 00:05:35.232 EAL: Detected lcore 32 as core 10 on socket 0 00:05:35.232 EAL: Detected lcore 33 as core 11 on socket 0 00:05:35.232 EAL: Detected lcore 34 as core 12 on socket 0 00:05:35.232 EAL: Detected lcore 35 as core 13 on socket 0 00:05:35.232 EAL: Detected lcore 36 as core 0 on socket 1 00:05:35.232 EAL: Detected lcore 37 as core 1 on socket 1 00:05:35.232 EAL: Detected lcore 38 as core 2 on socket 1 00:05:35.232 EAL: Detected lcore 39 as core 3 on socket 1 00:05:35.232 EAL: Detected lcore 40 as core 4 on socket 1 00:05:35.232 EAL: Detected lcore 41 as core 5 on socket 1 00:05:35.232 EAL: Detected lcore 42 as core 8 on socket 1 00:05:35.232 EAL: Detected lcore 43 as core 9 on socket 1 00:05:35.232 EAL: Detected lcore 44 as core 10 on socket 1 00:05:35.232 EAL: Detected lcore 45 as core 11 on socket 1 00:05:35.232 EAL: Detected lcore 46 as core 12 on socket 1 00:05:35.232 EAL: Detected lcore 47 as core 13 on socket 1 00:05:35.232 EAL: Maximum logical cores by configuration: 128 00:05:35.232 EAL: Detected CPU lcores: 48 00:05:35.232 EAL: Detected NUMA nodes: 2 00:05:35.232 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:35.232 EAL: Detected shared linkage of DPDK 00:05:35.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:35.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:35.232 EAL: Registered [vdev] bus. 
00:05:35.232 EAL: bus.vdev log level changed from disabled to notice 00:05:35.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:35.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:35.232 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:35.232 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:35.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:35.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:35.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:35.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:35.232 EAL: No shared files mode enabled, IPC will be disabled 00:05:35.232 EAL: No shared files mode enabled, IPC is disabled 00:05:35.232 EAL: Bus pci wants IOVA as 'DC' 00:05:35.232 EAL: Bus vdev wants IOVA as 'DC' 00:05:35.232 EAL: Buses did not request a specific IOVA mode. 00:05:35.232 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:35.232 EAL: Selected IOVA mode 'VA' 00:05:35.232 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.232 EAL: Probing VFIO support... 00:05:35.232 EAL: IOMMU type 1 (Type 1) is supported 00:05:35.232 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:35.232 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:35.232 EAL: VFIO support initialized 00:05:35.232 EAL: Ask a virtual area of 0x2e000 bytes 00:05:35.232 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:35.232 EAL: Setting up physically contiguous memory... 
00:05:35.232 EAL: Setting maximum number of open files to 524288 00:05:35.232 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:35.232 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:35.232 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:35.232 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.232 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:35.232 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.232 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.232 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:35.232 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:35.232 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.232 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:35.232 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.232 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.233 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:35.233 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:35.233 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.233 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:35.233 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.233 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.233 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:35.233 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:35.233 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.233 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:35.233 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.233 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.233 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:35.233 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:35.233 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:35.233 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.233 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:35.233 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.233 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.233 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:35.233 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:35.233 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.233 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:35.233 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.233 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.233 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:35.233 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:35.233 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.233 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:35.233 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.233 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.233 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:35.233 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:35.233 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.233 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:35.233 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.233 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.233 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:35.233 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:35.233 EAL: Hugepages will be freed exactly as allocated. 
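The memseg-list reservations above are internally consistent: each list reserves virtual address space equal to n_segs × hugepage_sz, and EAL creates four such lists per NUMA socket. A small sketch (values taken from the log; variable names here are illustrative, not DPDK API):

```python
# Reproduce the per-memseg-list VA reservation size reported by EAL above.
n_segs = 8192          # "n_segs:8192" from the log
hugepage_sz = 2097152  # 2 MiB pages ("hugepage_sz:2097152")

va_per_list = n_segs * hugepage_sz
print(hex(va_per_list))  # 0x400000000 — matches each "size = 0x400000000" line

# Four lists per socket, two sockets detected:
total_va = va_per_list * 4 * 2
print(total_va // (1 << 30), "GiB")  # 128 GiB of address space reserved up front
```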
00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: TSC frequency is ~2700000 KHz 00:05:35.233 EAL: Main lcore 0 is ready (tid=7f0a39305a00;cpuset=[0]) 00:05:35.233 EAL: Trying to obtain current memory policy. 00:05:35.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.233 EAL: Restoring previous memory policy: 0 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was expanded by 2MB 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:35.233 EAL: Mem event callback 'spdk:(nil)' registered 00:05:35.233 00:05:35.233 00:05:35.233 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.233 http://cunit.sourceforge.net/ 00:05:35.233 00:05:35.233 00:05:35.233 Suite: components_suite 00:05:35.233 Test: vtophys_malloc_test ...passed 00:05:35.233 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.233 EAL: Restoring previous memory policy: 4 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.233 EAL: Trying to obtain current memory policy. 
00:05:35.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.233 EAL: Restoring previous memory policy: 4 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.233 EAL: Trying to obtain current memory policy. 00:05:35.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.233 EAL: Restoring previous memory policy: 4 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.233 EAL: Trying to obtain current memory policy. 00:05:35.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.233 EAL: Restoring previous memory policy: 4 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.233 EAL: Trying to obtain current memory policy. 
00:05:35.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.233 EAL: Restoring previous memory policy: 4 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.233 EAL: Trying to obtain current memory policy. 00:05:35.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.233 EAL: Restoring previous memory policy: 4 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.233 EAL: request: mp_malloc_sync 00:05:35.233 EAL: No shared files mode enabled, IPC is disabled 00:05:35.233 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.492 EAL: request: mp_malloc_sync 00:05:35.492 EAL: No shared files mode enabled, IPC is disabled 00:05:35.492 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.492 EAL: Trying to obtain current memory policy. 00:05:35.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.492 EAL: Restoring previous memory policy: 4 00:05:35.492 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.492 EAL: request: mp_malloc_sync 00:05:35.492 EAL: No shared files mode enabled, IPC is disabled 00:05:35.492 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.492 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.492 EAL: request: mp_malloc_sync 00:05:35.492 EAL: No shared files mode enabled, IPC is disabled 00:05:35.492 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.492 EAL: Trying to obtain current memory policy. 
00:05:35.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.492 EAL: Restoring previous memory policy: 4 00:05:35.492 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.492 EAL: request: mp_malloc_sync 00:05:35.492 EAL: No shared files mode enabled, IPC is disabled 00:05:35.492 EAL: Heap on socket 0 was expanded by 258MB 00:05:35.492 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.769 EAL: request: mp_malloc_sync 00:05:35.769 EAL: No shared files mode enabled, IPC is disabled 00:05:35.769 EAL: Heap on socket 0 was shrunk by 258MB 00:05:35.769 EAL: Trying to obtain current memory policy. 00:05:35.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.769 EAL: Restoring previous memory policy: 4 00:05:35.769 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.769 EAL: request: mp_malloc_sync 00:05:35.769 EAL: No shared files mode enabled, IPC is disabled 00:05:35.769 EAL: Heap on socket 0 was expanded by 514MB 00:05:35.769 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.027 EAL: request: mp_malloc_sync 00:05:36.027 EAL: No shared files mode enabled, IPC is disabled 00:05:36.027 EAL: Heap on socket 0 was shrunk by 514MB 00:05:36.027 EAL: Trying to obtain current memory policy. 
00:05:36.027 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.286 EAL: Restoring previous memory policy: 4 00:05:36.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.286 EAL: request: mp_malloc_sync 00:05:36.286 EAL: No shared files mode enabled, IPC is disabled 00:05:36.286 EAL: Heap on socket 0 was expanded by 1026MB 00:05:36.544 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.803 EAL: request: mp_malloc_sync 00:05:36.803 EAL: No shared files mode enabled, IPC is disabled 00:05:36.803 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:36.803 passed 00:05:36.803 00:05:36.803 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.803 suites 1 1 n/a 0 0 00:05:36.803 tests 2 2 2 0 0 00:05:36.803 asserts 497 497 497 0 n/a 00:05:36.803 00:05:36.803 Elapsed time = 1.386 seconds 00:05:36.803 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.803 EAL: request: mp_malloc_sync 00:05:36.803 EAL: No shared files mode enabled, IPC is disabled 00:05:36.803 EAL: Heap on socket 0 was shrunk by 2MB 00:05:36.803 EAL: No shared files mode enabled, IPC is disabled 00:05:36.803 EAL: No shared files mode enabled, IPC is disabled 00:05:36.803 EAL: No shared files mode enabled, IPC is disabled 00:05:36.803 00:05:36.803 real 0m1.507s 00:05:36.803 user 0m0.876s 00:05:36.803 sys 0m0.594s 00:05:36.803 18:37:24 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.803 18:37:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:36.803 ************************************ 00:05:36.803 END TEST env_vtophys 00:05:36.803 ************************************ 00:05:36.803 18:37:24 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.803 18:37:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.803 18:37:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.803 18:37:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
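The heap-expansion sizes exercised by vtophys_spdk_malloc_test above (4, 6, 10, 18, ... 1026 MB) follow a doubling-plus-slack pattern, 2^n + 2 MB. A one-line reconstruction of the sequence, purely to make the pattern explicit:

```python
# The "Heap on socket 0 was expanded by N MB" sizes in the run above are 2**n + 2.
sizes_mb = [2**n + 2 for n in range(1, 11)]
print(sizes_mb)  # [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]
```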
00:05:36.803 18:37:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.803 ************************************ 00:05:36.803 START TEST env_pci 00:05:36.803 ************************************ 00:05:36.803 18:37:24 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.803 00:05:36.803 00:05:36.803 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.803 http://cunit.sourceforge.net/ 00:05:36.803 00:05:36.803 00:05:36.803 Suite: pci 00:05:36.803 Test: pci_hook ...[2024-07-14 18:37:24.876063] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3462563 has claimed it 00:05:36.803 EAL: Cannot find device (10000:00:01.0) 00:05:36.803 EAL: Failed to attach device on primary process 00:05:36.803 passed 00:05:36.803 00:05:36.803 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.803 suites 1 1 n/a 0 0 00:05:36.803 tests 1 1 1 0 0 00:05:36.803 asserts 25 25 25 0 n/a 00:05:36.803 00:05:36.803 Elapsed time = 0.019 seconds 00:05:36.803 00:05:36.803 real 0m0.029s 00:05:36.803 user 0m0.004s 00:05:36.803 sys 0m0.025s 00:05:36.803 18:37:24 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.803 18:37:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:36.803 ************************************ 00:05:36.803 END TEST env_pci 00:05:36.803 ************************************ 00:05:36.803 18:37:24 env -- common/autotest_common.sh@1142 -- # return 0 00:05:36.803 18:37:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.803 18:37:24 env -- env/env.sh@15 -- # uname 00:05:36.803 18:37:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.803 18:37:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.803 18:37:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.803 18:37:24 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:36.803 18:37:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.803 18:37:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.803 ************************************ 00:05:36.803 START TEST env_dpdk_post_init 00:05:36.803 ************************************ 00:05:36.803 18:37:24 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.803 EAL: Detected CPU lcores: 48 00:05:36.803 EAL: Detected NUMA nodes: 2 00:05:36.804 EAL: Detected shared linkage of DPDK 00:05:36.804 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.804 EAL: Selected IOVA mode 'VA' 00:05:36.804 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.804 EAL: VFIO support initialized 00:05:36.804 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.063 EAL: Using IOMMU type 1 (Type 1) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 
1) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:37.063 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:38.000 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:41.289 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:41.289 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:41.289 Starting DPDK initialization... 00:05:41.289 Starting SPDK post initialization... 00:05:41.289 SPDK NVMe probe 00:05:41.289 Attaching to 0000:88:00.0 00:05:41.289 Attached to 0000:88:00.0 00:05:41.289 Cleaning up... 
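The probe lines above identify devices by extended PCI BDF strings ("domain:bus:device.function"). A minimal parsing sketch; `parse_bdf` is an illustrative helper, not part of SPDK or DPDK:

```python
# Parse an extended PCI address like the "0000:88:00.0" probed above.
def parse_bdf(bdf: str):
    """Split 'domain:bus:device.function' into four integers (all hex in the string)."""
    domain, bus, devfn = bdf.split(":")
    device, function = devfn.split(".")
    return int(domain, 16), int(bus, 16), int(device, 16), int(function, 16)

print(parse_bdf("0000:88:00.0"))  # (0, 136, 0, 0) — the NVMe controller on socket 1
```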
00:05:41.289 00:05:41.289 real 0m4.405s 00:05:41.289 user 0m3.285s 00:05:41.289 sys 0m0.180s 00:05:41.289 18:37:29 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.289 18:37:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.289 ************************************ 00:05:41.289 END TEST env_dpdk_post_init 00:05:41.289 ************************************ 00:05:41.289 18:37:29 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.289 18:37:29 env -- env/env.sh@26 -- # uname 00:05:41.289 18:37:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:41.289 18:37:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.289 18:37:29 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.289 18:37:29 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.289 18:37:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.289 ************************************ 00:05:41.289 START TEST env_mem_callbacks 00:05:41.289 ************************************ 00:05:41.289 18:37:29 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:41.289 EAL: Detected CPU lcores: 48 00:05:41.289 EAL: Detected NUMA nodes: 2 00:05:41.289 EAL: Detected shared linkage of DPDK 00:05:41.289 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.289 EAL: Selected IOVA mode 'VA' 00:05:41.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.289 EAL: VFIO support initialized 00:05:41.289 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.289 00:05:41.289 00:05:41.289 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.289 http://cunit.sourceforge.net/ 00:05:41.289 00:05:41.289 00:05:41.289 Suite: memory 00:05:41.289 Test: test ... 
00:05:41.289 register 0x200000200000 2097152 00:05:41.289 malloc 3145728 00:05:41.289 register 0x200000400000 4194304 00:05:41.289 buf 0x200000500000 len 3145728 PASSED 00:05:41.289 malloc 64 00:05:41.289 buf 0x2000004fff40 len 64 PASSED 00:05:41.289 malloc 4194304 00:05:41.289 register 0x200000800000 6291456 00:05:41.289 buf 0x200000a00000 len 4194304 PASSED 00:05:41.289 free 0x200000500000 3145728 00:05:41.289 free 0x2000004fff40 64 00:05:41.289 unregister 0x200000400000 4194304 PASSED 00:05:41.289 free 0x200000a00000 4194304 00:05:41.289 unregister 0x200000800000 6291456 PASSED 00:05:41.289 malloc 8388608 00:05:41.289 register 0x200000400000 10485760 00:05:41.289 buf 0x200000600000 len 8388608 PASSED 00:05:41.289 free 0x200000600000 8388608 00:05:41.289 unregister 0x200000400000 10485760 PASSED 00:05:41.289 passed 00:05:41.289 00:05:41.289 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.289 suites 1 1 n/a 0 0 00:05:41.289 tests 1 1 1 0 0 00:05:41.289 asserts 15 15 15 0 n/a 00:05:41.289 00:05:41.289 Elapsed time = 0.005 seconds 00:05:41.289 00:05:41.289 real 0m0.050s 00:05:41.289 user 0m0.014s 00:05:41.289 sys 0m0.036s 00:05:41.289 18:37:29 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.289 18:37:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:41.289 ************************************ 00:05:41.289 END TEST env_mem_callbacks 00:05:41.289 ************************************ 00:05:41.289 18:37:29 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.289 00:05:41.289 real 0m6.433s 00:05:41.289 user 0m4.437s 00:05:41.289 sys 0m1.037s 00:05:41.289 18:37:29 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.289 18:37:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.289 ************************************ 00:05:41.289 END TEST env 00:05:41.289 ************************************ 00:05:41.289 18:37:29 -- common/autotest_common.sh@1142 -- # return 0 
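The mem_callbacks trace above interleaves register/unregister events with malloc'd buffers; each PASSED line asserts that a buffer lies inside currently registered memory. A hedged bookkeeping sketch of that containment check, using addresses from the trace (this is illustrative logic, not the test's actual implementation):

```python
# Check that a buffer falls within the union of registered regions, as the
# "buf ... len ... PASSED" lines above verify. Addresses come from the trace.
regions = {0x200000200000: 2097152, 0x200000400000: 4194304}

def covered(addr, length):
    end = addr + length
    spans = sorted((a, a + l) for a, l in regions.items())
    merged = []
    for s, e in spans:  # merge adjacent/overlapping registered spans
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return any(s <= addr and end <= e for s, e in merged)

print(covered(0x200000500000, 3145728))  # True — the 3 MiB buf fits the merged span
```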
00:05:41.289 18:37:29 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.289 18:37:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.289 18:37:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.289 18:37:29 -- common/autotest_common.sh@10 -- # set +x 00:05:41.547 ************************************ 00:05:41.547 START TEST rpc 00:05:41.547 ************************************ 00:05:41.547 18:37:29 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.547 * Looking for test storage... 00:05:41.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.547 18:37:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3463227 00:05:41.547 18:37:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:41.547 18:37:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.547 18:37:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3463227 00:05:41.547 18:37:29 rpc -- common/autotest_common.sh@829 -- # '[' -z 3463227 ']' 00:05:41.547 18:37:29 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.547 18:37:29 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.547 18:37:29 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.547 18:37:29 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.547 18:37:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.547 [2024-07-14 18:37:29.617889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:41.547 [2024-07-14 18:37:29.617977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463227 ] 00:05:41.547 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.547 [2024-07-14 18:37:29.688195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.807 [2024-07-14 18:37:29.779602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.807 [2024-07-14 18:37:29.779659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3463227' to capture a snapshot of events at runtime. 00:05:41.807 [2024-07-14 18:37:29.779674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.807 [2024-07-14 18:37:29.779687] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.807 [2024-07-14 18:37:29.779698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3463227 for offline analysis/debug. 
00:05:41.807 [2024-07-14 18:37:29.779729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.067 18:37:30 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.067 18:37:30 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.067 18:37:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:42.067 18:37:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:42.067 18:37:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:42.067 18:37:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:42.067 18:37:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.067 18:37:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.067 18:37:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.067 ************************************ 00:05:42.067 START TEST rpc_integrity 00:05:42.067 ************************************ 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.067 18:37:30 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.067 { 00:05:42.067 "name": "Malloc0", 00:05:42.067 "aliases": [ 00:05:42.067 "1ded737f-4063-4072-82b4-467ffc7f61df" 00:05:42.067 ], 00:05:42.067 "product_name": "Malloc disk", 00:05:42.067 "block_size": 512, 00:05:42.067 "num_blocks": 16384, 00:05:42.067 "uuid": "1ded737f-4063-4072-82b4-467ffc7f61df", 00:05:42.067 "assigned_rate_limits": { 00:05:42.067 "rw_ios_per_sec": 0, 00:05:42.067 "rw_mbytes_per_sec": 0, 00:05:42.067 "r_mbytes_per_sec": 0, 00:05:42.067 "w_mbytes_per_sec": 0 00:05:42.067 }, 00:05:42.067 "claimed": false, 00:05:42.067 "zoned": false, 00:05:42.067 "supported_io_types": { 00:05:42.067 "read": true, 00:05:42.067 "write": true, 00:05:42.067 "unmap": true, 00:05:42.067 "flush": true, 00:05:42.067 "reset": true, 00:05:42.067 "nvme_admin": false, 00:05:42.067 "nvme_io": false, 00:05:42.067 "nvme_io_md": false, 00:05:42.067 "write_zeroes": true, 00:05:42.067 "zcopy": true, 00:05:42.067 "get_zone_info": false, 00:05:42.067 
"zone_management": false, 00:05:42.067 "zone_append": false, 00:05:42.067 "compare": false, 00:05:42.067 "compare_and_write": false, 00:05:42.067 "abort": true, 00:05:42.067 "seek_hole": false, 00:05:42.067 "seek_data": false, 00:05:42.067 "copy": true, 00:05:42.067 "nvme_iov_md": false 00:05:42.067 }, 00:05:42.067 "memory_domains": [ 00:05:42.067 { 00:05:42.067 "dma_device_id": "system", 00:05:42.067 "dma_device_type": 1 00:05:42.067 }, 00:05:42.067 { 00:05:42.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.067 "dma_device_type": 2 00:05:42.067 } 00:05:42.067 ], 00:05:42.067 "driver_specific": {} 00:05:42.067 } 00:05:42.067 ]' 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.067 [2024-07-14 18:37:30.176370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.067 [2024-07-14 18:37:30.176413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.067 [2024-07-14 18:37:30.176436] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x180bbb0 00:05:42.067 [2024-07-14 18:37:30.176453] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.067 [2024-07-14 18:37:30.177961] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.067 [2024-07-14 18:37:30.177991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.067 Passthru0 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.067 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.067 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.067 { 00:05:42.067 "name": "Malloc0", 00:05:42.067 "aliases": [ 00:05:42.067 "1ded737f-4063-4072-82b4-467ffc7f61df" 00:05:42.067 ], 00:05:42.067 "product_name": "Malloc disk", 00:05:42.067 "block_size": 512, 00:05:42.067 "num_blocks": 16384, 00:05:42.067 "uuid": "1ded737f-4063-4072-82b4-467ffc7f61df", 00:05:42.067 "assigned_rate_limits": { 00:05:42.067 "rw_ios_per_sec": 0, 00:05:42.067 "rw_mbytes_per_sec": 0, 00:05:42.067 "r_mbytes_per_sec": 0, 00:05:42.067 "w_mbytes_per_sec": 0 00:05:42.067 }, 00:05:42.067 "claimed": true, 00:05:42.067 "claim_type": "exclusive_write", 00:05:42.067 "zoned": false, 00:05:42.067 "supported_io_types": { 00:05:42.067 "read": true, 00:05:42.067 "write": true, 00:05:42.067 "unmap": true, 00:05:42.067 "flush": true, 00:05:42.067 "reset": true, 00:05:42.067 "nvme_admin": false, 00:05:42.067 "nvme_io": false, 00:05:42.067 "nvme_io_md": false, 00:05:42.067 "write_zeroes": true, 00:05:42.067 "zcopy": true, 00:05:42.067 "get_zone_info": false, 00:05:42.067 "zone_management": false, 00:05:42.067 "zone_append": false, 00:05:42.067 "compare": false, 00:05:42.067 "compare_and_write": false, 00:05:42.067 "abort": true, 00:05:42.067 "seek_hole": false, 00:05:42.067 "seek_data": false, 00:05:42.067 "copy": true, 00:05:42.067 "nvme_iov_md": false 00:05:42.067 }, 00:05:42.067 "memory_domains": [ 00:05:42.067 { 00:05:42.067 "dma_device_id": "system", 00:05:42.067 "dma_device_type": 1 00:05:42.067 }, 00:05:42.067 { 00:05:42.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.067 "dma_device_type": 2 00:05:42.067 } 00:05:42.067 ], 00:05:42.067 "driver_specific": {} 00:05:42.067 }, 00:05:42.067 { 
00:05:42.067 "name": "Passthru0", 00:05:42.067 "aliases": [ 00:05:42.067 "82b22951-42bd-5cd4-9906-56bc594e9ff5" 00:05:42.067 ], 00:05:42.067 "product_name": "passthru", 00:05:42.067 "block_size": 512, 00:05:42.067 "num_blocks": 16384, 00:05:42.067 "uuid": "82b22951-42bd-5cd4-9906-56bc594e9ff5", 00:05:42.067 "assigned_rate_limits": { 00:05:42.067 "rw_ios_per_sec": 0, 00:05:42.067 "rw_mbytes_per_sec": 0, 00:05:42.067 "r_mbytes_per_sec": 0, 00:05:42.067 "w_mbytes_per_sec": 0 00:05:42.067 }, 00:05:42.067 "claimed": false, 00:05:42.067 "zoned": false, 00:05:42.067 "supported_io_types": { 00:05:42.067 "read": true, 00:05:42.067 "write": true, 00:05:42.067 "unmap": true, 00:05:42.067 "flush": true, 00:05:42.067 "reset": true, 00:05:42.067 "nvme_admin": false, 00:05:42.067 "nvme_io": false, 00:05:42.067 "nvme_io_md": false, 00:05:42.067 "write_zeroes": true, 00:05:42.067 "zcopy": true, 00:05:42.067 "get_zone_info": false, 00:05:42.067 "zone_management": false, 00:05:42.067 "zone_append": false, 00:05:42.067 "compare": false, 00:05:42.067 "compare_and_write": false, 00:05:42.067 "abort": true, 00:05:42.067 "seek_hole": false, 00:05:42.067 "seek_data": false, 00:05:42.067 "copy": true, 00:05:42.067 "nvme_iov_md": false 00:05:42.067 }, 00:05:42.067 "memory_domains": [ 00:05:42.067 { 00:05:42.067 "dma_device_id": "system", 00:05:42.067 "dma_device_type": 1 00:05:42.067 }, 00:05:42.067 { 00:05:42.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.067 "dma_device_type": 2 00:05:42.067 } 00:05:42.068 ], 00:05:42.068 "driver_specific": { 00:05:42.068 "passthru": { 00:05:42.068 "name": "Passthru0", 00:05:42.068 "base_bdev_name": "Malloc0" 00:05:42.068 } 00:05:42.068 } 00:05:42.068 } 00:05:42.068 ]' 00:05:42.068 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.068 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.068 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.068 18:37:30 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.068 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.068 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.068 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:42.068 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.068 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.068 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.068 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.068 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.068 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.068 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.068 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.068 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.328 18:37:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.328 00:05:42.328 real 0m0.230s 00:05:42.328 user 0m0.149s 00:05:42.328 sys 0m0.025s 00:05:42.328 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.328 18:37:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.328 ************************************ 00:05:42.328 END TEST rpc_integrity 00:05:42.328 ************************************ 00:05:42.328 18:37:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.328 18:37:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.328 18:37:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.328 18:37:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.328 18:37:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.328 
************************************ 00:05:42.328 START TEST rpc_plugins 00:05:42.328 ************************************ 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:42.328 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.328 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:42.328 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.328 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.328 { 00:05:42.328 "name": "Malloc1", 00:05:42.328 "aliases": [ 00:05:42.328 "50aa56fa-123a-432f-980e-441b7cecbcc8" 00:05:42.328 ], 00:05:42.328 "product_name": "Malloc disk", 00:05:42.328 "block_size": 4096, 00:05:42.328 "num_blocks": 256, 00:05:42.328 "uuid": "50aa56fa-123a-432f-980e-441b7cecbcc8", 00:05:42.328 "assigned_rate_limits": { 00:05:42.328 "rw_ios_per_sec": 0, 00:05:42.328 "rw_mbytes_per_sec": 0, 00:05:42.328 "r_mbytes_per_sec": 0, 00:05:42.328 "w_mbytes_per_sec": 0 00:05:42.328 }, 00:05:42.328 "claimed": false, 00:05:42.328 "zoned": false, 00:05:42.328 "supported_io_types": { 00:05:42.328 "read": true, 00:05:42.328 "write": true, 00:05:42.328 "unmap": true, 00:05:42.328 "flush": true, 00:05:42.328 "reset": true, 00:05:42.328 "nvme_admin": false, 00:05:42.328 "nvme_io": false, 00:05:42.328 "nvme_io_md": false, 00:05:42.328 "write_zeroes": true, 00:05:42.328 "zcopy": true, 00:05:42.328 
"get_zone_info": false, 00:05:42.328 "zone_management": false, 00:05:42.328 "zone_append": false, 00:05:42.328 "compare": false, 00:05:42.328 "compare_and_write": false, 00:05:42.328 "abort": true, 00:05:42.328 "seek_hole": false, 00:05:42.328 "seek_data": false, 00:05:42.328 "copy": true, 00:05:42.328 "nvme_iov_md": false 00:05:42.328 }, 00:05:42.328 "memory_domains": [ 00:05:42.328 { 00:05:42.328 "dma_device_id": "system", 00:05:42.328 "dma_device_type": 1 00:05:42.328 }, 00:05:42.328 { 00:05:42.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.328 "dma_device_type": 2 00:05:42.328 } 00:05:42.328 ], 00:05:42.328 "driver_specific": {} 00:05:42.328 } 00:05:42.328 ]' 00:05:42.328 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:42.328 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.328 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.328 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.328 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.329 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.329 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.329 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.329 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.329 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.329 18:37:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.329 00:05:42.329 real 0m0.116s 00:05:42.329 user 0m0.074s 00:05:42.329 sys 0m0.014s 00:05:42.329 18:37:30 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.329 18:37:30 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.329 ************************************ 00:05:42.329 END TEST rpc_plugins 00:05:42.329 ************************************ 00:05:42.329 18:37:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.329 18:37:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.329 18:37:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.329 18:37:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.329 18:37:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.329 ************************************ 00:05:42.329 START TEST rpc_trace_cmd_test 00:05:42.329 ************************************ 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.329 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3463227", 00:05:42.329 "tpoint_group_mask": "0x8", 00:05:42.329 "iscsi_conn": { 00:05:42.329 "mask": "0x2", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "scsi": { 00:05:42.329 "mask": "0x4", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "bdev": { 00:05:42.329 "mask": "0x8", 00:05:42.329 "tpoint_mask": "0xffffffffffffffff" 00:05:42.329 }, 00:05:42.329 "nvmf_rdma": { 00:05:42.329 "mask": "0x10", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "nvmf_tcp": { 00:05:42.329 "mask": "0x20", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 
00:05:42.329 "ftl": { 00:05:42.329 "mask": "0x40", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "blobfs": { 00:05:42.329 "mask": "0x80", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "dsa": { 00:05:42.329 "mask": "0x200", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "thread": { 00:05:42.329 "mask": "0x400", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "nvme_pcie": { 00:05:42.329 "mask": "0x800", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "iaa": { 00:05:42.329 "mask": "0x1000", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "nvme_tcp": { 00:05:42.329 "mask": "0x2000", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "bdev_nvme": { 00:05:42.329 "mask": "0x4000", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 }, 00:05:42.329 "sock": { 00:05:42.329 "mask": "0x8000", 00:05:42.329 "tpoint_mask": "0x0" 00:05:42.329 } 00:05:42.329 }' 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:42.329 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.588 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.588 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.588 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.588 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.588 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.588 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.588 18:37:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.588 00:05:42.588 real 0m0.196s 00:05:42.588 user 0m0.169s 00:05:42.588 sys 0m0.021s 00:05:42.588 18:37:30 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.588 18:37:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.588 ************************************ 00:05:42.588 END TEST rpc_trace_cmd_test 00:05:42.588 ************************************ 00:05:42.588 18:37:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.588 18:37:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.588 18:37:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.588 18:37:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.588 18:37:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.588 18:37:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.588 18:37:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.588 ************************************ 00:05:42.588 START TEST rpc_daemon_integrity 00:05:42.588 ************************************ 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.588 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.589 18:37:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.589 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.589 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.589 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.589 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.589 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.589 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.589 { 00:05:42.589 "name": "Malloc2", 00:05:42.589 "aliases": [ 00:05:42.589 "65291984-ef3a-455c-8ed4-4aaed0a394d1" 00:05:42.589 ], 00:05:42.589 "product_name": "Malloc disk", 00:05:42.589 "block_size": 512, 00:05:42.589 "num_blocks": 16384, 00:05:42.589 "uuid": "65291984-ef3a-455c-8ed4-4aaed0a394d1", 00:05:42.589 "assigned_rate_limits": { 00:05:42.589 "rw_ios_per_sec": 0, 00:05:42.589 "rw_mbytes_per_sec": 0, 00:05:42.589 "r_mbytes_per_sec": 0, 00:05:42.589 "w_mbytes_per_sec": 0 00:05:42.589 }, 00:05:42.589 "claimed": false, 00:05:42.589 "zoned": false, 00:05:42.589 "supported_io_types": { 00:05:42.589 "read": true, 00:05:42.589 "write": true, 00:05:42.589 "unmap": true, 00:05:42.589 "flush": true, 00:05:42.589 "reset": true, 00:05:42.589 "nvme_admin": false, 00:05:42.589 "nvme_io": false, 00:05:42.589 "nvme_io_md": false, 00:05:42.589 "write_zeroes": true, 00:05:42.589 "zcopy": true, 00:05:42.589 "get_zone_info": false, 00:05:42.589 "zone_management": false, 00:05:42.589 "zone_append": false, 00:05:42.589 "compare": false, 00:05:42.589 "compare_and_write": false, 00:05:42.589 "abort": true, 00:05:42.589 "seek_hole": false, 00:05:42.589 "seek_data": false, 00:05:42.589 "copy": true, 00:05:42.589 "nvme_iov_md": false 00:05:42.589 }, 00:05:42.589 "memory_domains": [ 00:05:42.589 { 00:05:42.589 "dma_device_id": "system", 00:05:42.589 "dma_device_type": 
1 00:05:42.589 }, 00:05:42.589 { 00:05:42.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.589 "dma_device_type": 2 00:05:42.589 } 00:05:42.589 ], 00:05:42.589 "driver_specific": {} 00:05:42.589 } 00:05:42.589 ]' 00:05:42.589 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 [2024-07-14 18:37:30.854855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.849 [2024-07-14 18:37:30.854905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.849 [2024-07-14 18:37:30.854953] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x180c5b0 00:05:42.849 [2024-07-14 18:37:30.854968] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.849 [2024-07-14 18:37:30.856308] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.849 [2024-07-14 18:37:30.856337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.849 Passthru0 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.849 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:05:42.849 { 00:05:42.849 "name": "Malloc2", 00:05:42.849 "aliases": [ 00:05:42.849 "65291984-ef3a-455c-8ed4-4aaed0a394d1" 00:05:42.849 ], 00:05:42.849 "product_name": "Malloc disk", 00:05:42.849 "block_size": 512, 00:05:42.849 "num_blocks": 16384, 00:05:42.849 "uuid": "65291984-ef3a-455c-8ed4-4aaed0a394d1", 00:05:42.849 "assigned_rate_limits": { 00:05:42.849 "rw_ios_per_sec": 0, 00:05:42.849 "rw_mbytes_per_sec": 0, 00:05:42.849 "r_mbytes_per_sec": 0, 00:05:42.849 "w_mbytes_per_sec": 0 00:05:42.849 }, 00:05:42.849 "claimed": true, 00:05:42.849 "claim_type": "exclusive_write", 00:05:42.849 "zoned": false, 00:05:42.849 "supported_io_types": { 00:05:42.849 "read": true, 00:05:42.849 "write": true, 00:05:42.849 "unmap": true, 00:05:42.849 "flush": true, 00:05:42.849 "reset": true, 00:05:42.849 "nvme_admin": false, 00:05:42.849 "nvme_io": false, 00:05:42.849 "nvme_io_md": false, 00:05:42.849 "write_zeroes": true, 00:05:42.849 "zcopy": true, 00:05:42.849 "get_zone_info": false, 00:05:42.849 "zone_management": false, 00:05:42.849 "zone_append": false, 00:05:42.849 "compare": false, 00:05:42.849 "compare_and_write": false, 00:05:42.849 "abort": true, 00:05:42.849 "seek_hole": false, 00:05:42.849 "seek_data": false, 00:05:42.849 "copy": true, 00:05:42.849 "nvme_iov_md": false 00:05:42.849 }, 00:05:42.849 "memory_domains": [ 00:05:42.849 { 00:05:42.850 "dma_device_id": "system", 00:05:42.850 "dma_device_type": 1 00:05:42.850 }, 00:05:42.850 { 00:05:42.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.850 "dma_device_type": 2 00:05:42.850 } 00:05:42.850 ], 00:05:42.850 "driver_specific": {} 00:05:42.850 }, 00:05:42.850 { 00:05:42.850 "name": "Passthru0", 00:05:42.850 "aliases": [ 00:05:42.850 "6bbbcfff-91df-568e-81f7-5381f62d5192" 00:05:42.850 ], 00:05:42.850 "product_name": "passthru", 00:05:42.850 "block_size": 512, 00:05:42.850 "num_blocks": 16384, 00:05:42.850 "uuid": "6bbbcfff-91df-568e-81f7-5381f62d5192", 00:05:42.850 "assigned_rate_limits": { 00:05:42.850 
"rw_ios_per_sec": 0, 00:05:42.850 "rw_mbytes_per_sec": 0, 00:05:42.850 "r_mbytes_per_sec": 0, 00:05:42.850 "w_mbytes_per_sec": 0 00:05:42.850 }, 00:05:42.850 "claimed": false, 00:05:42.850 "zoned": false, 00:05:42.850 "supported_io_types": { 00:05:42.850 "read": true, 00:05:42.850 "write": true, 00:05:42.850 "unmap": true, 00:05:42.850 "flush": true, 00:05:42.850 "reset": true, 00:05:42.850 "nvme_admin": false, 00:05:42.850 "nvme_io": false, 00:05:42.850 "nvme_io_md": false, 00:05:42.850 "write_zeroes": true, 00:05:42.850 "zcopy": true, 00:05:42.850 "get_zone_info": false, 00:05:42.850 "zone_management": false, 00:05:42.850 "zone_append": false, 00:05:42.850 "compare": false, 00:05:42.850 "compare_and_write": false, 00:05:42.850 "abort": true, 00:05:42.850 "seek_hole": false, 00:05:42.850 "seek_data": false, 00:05:42.850 "copy": true, 00:05:42.850 "nvme_iov_md": false 00:05:42.850 }, 00:05:42.850 "memory_domains": [ 00:05:42.850 { 00:05:42.850 "dma_device_id": "system", 00:05:42.850 "dma_device_type": 1 00:05:42.850 }, 00:05:42.850 { 00:05:42.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.850 "dma_device_type": 2 00:05:42.850 } 00:05:42.850 ], 00:05:42.850 "driver_specific": { 00:05:42.850 "passthru": { 00:05:42.850 "name": "Passthru0", 00:05:42.850 "base_bdev_name": "Malloc2" 00:05:42.850 } 00:05:42.850 } 00:05:42.850 } 00:05:42.850 ]' 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.850 00:05:42.850 real 0m0.230s 00:05:42.850 user 0m0.149s 00:05:42.850 sys 0m0.023s 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.850 18:37:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.850 ************************************ 00:05:42.850 END TEST rpc_daemon_integrity 00:05:42.850 ************************************ 00:05:42.850 18:37:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.850 18:37:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.850 18:37:30 rpc -- rpc/rpc.sh@84 -- # killprocess 3463227 00:05:42.850 18:37:30 rpc -- common/autotest_common.sh@948 -- # '[' -z 3463227 ']' 00:05:42.850 18:37:30 rpc -- common/autotest_common.sh@952 -- # kill -0 3463227 00:05:42.850 18:37:30 rpc -- common/autotest_common.sh@953 -- # uname 00:05:42.850 18:37:30 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.850 18:37:30 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 3463227 00:05:42.850 18:37:31 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.850 18:37:31 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.850 18:37:31 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3463227' 00:05:42.850 killing process with pid 3463227 00:05:42.850 18:37:31 rpc -- common/autotest_common.sh@967 -- # kill 3463227 00:05:42.850 18:37:31 rpc -- common/autotest_common.sh@972 -- # wait 3463227 00:05:43.419 00:05:43.419 real 0m1.927s 00:05:43.419 user 0m2.419s 00:05:43.419 sys 0m0.618s 00:05:43.419 18:37:31 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.419 18:37:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.419 ************************************ 00:05:43.419 END TEST rpc 00:05:43.419 ************************************ 00:05:43.419 18:37:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.419 18:37:31 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.419 18:37:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.419 18:37:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.419 18:37:31 -- common/autotest_common.sh@10 -- # set +x 00:05:43.419 ************************************ 00:05:43.419 START TEST skip_rpc 00:05:43.419 ************************************ 00:05:43.419 18:37:31 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.419 * Looking for test storage... 
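Editor's note: the rpc_daemon_integrity trace above creates a Malloc bdev (`Malloc2`), layers a passthru bdev (`Passthru0`) on top of it, asserts `bdev_get_bdevs | jq length` is 2, deletes both, and asserts the length drops to 0. The sketch below mirrors that length check in Python; the bdev objects are heavily abridged from the JSON dump in the log, and this is illustrative only — the real harness does the same check in bash with `jq`.

```python
import json

# Abridged stand-ins for the two bdevs dumped by "rpc_cmd bdev_get_bdevs" above
bdevs = json.loads("""
[
  {"name": "Malloc2", "product_name": "Malloc disk", "claimed": true},
  {"name": "Passthru0", "product_name": "passthru",
   "driver_specific": {"passthru": {"base_bdev_name": "Malloc2"}}}
]
""")

# rpc.sh@21 asserts the list length is 2 while both bdevs exist...
assert len(bdevs) == 2

# ...then bdev_passthru_delete + bdev_malloc_delete run, and rpc.sh@26
# asserts the list is empty. Simulated here by filtering both names out:
bdevs = [b for b in bdevs if b["name"] not in ("Passthru0", "Malloc2")]
assert len(bdevs) == 0
print("ok")
```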
00:05:43.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.419 18:37:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:43.419 18:37:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.419 18:37:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.419 18:37:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.419 18:37:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.419 18:37:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.419 ************************************ 00:05:43.419 START TEST skip_rpc 00:05:43.419 ************************************ 00:05:43.419 18:37:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:43.419 18:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3463658 00:05:43.419 18:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.419 18:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.419 18:37:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.419 [2024-07-14 18:37:31.620840] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:43.419 [2024-07-14 18:37:31.620939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463658 ] 00:05:43.678 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.678 [2024-07-14 18:37:31.682215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.678 [2024-07-14 18:37:31.773664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3463658 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3463658 ']' 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3463658 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3463658 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3463658' 00:05:48.959 killing process with pid 3463658 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3463658 00:05:48.959 18:37:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3463658 00:05:48.959 00:05:48.959 real 0m5.447s 00:05:48.959 user 0m5.121s 00:05:48.959 sys 0m0.327s 00:05:48.959 18:37:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.959 18:37:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.959 ************************************ 00:05:48.959 END TEST skip_rpc 00:05:48.959 ************************************ 00:05:48.959 18:37:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.959 18:37:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.959 18:37:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.959 18:37:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.959 
18:37:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.959 ************************************ 00:05:48.959 START TEST skip_rpc_with_json 00:05:48.959 ************************************ 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3464345 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3464345 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3464345 ']' 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.959 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.959 [2024-07-14 18:37:37.114974] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:48.959 [2024-07-14 18:37:37.115075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464345 ] 00:05:48.959 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.959 [2024-07-14 18:37:37.173231] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.218 [2024-07-14 18:37:37.261837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.478 [2024-07-14 18:37:37.503195] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.478 request: 00:05:49.478 { 00:05:49.478 "trtype": "tcp", 00:05:49.478 "method": "nvmf_get_transports", 00:05:49.478 "req_id": 1 00:05:49.478 } 00:05:49.478 Got JSON-RPC error response 00:05:49.478 response: 00:05:49.478 { 00:05:49.478 "code": -19, 00:05:49.478 "message": "No such device" 00:05:49.478 } 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.478 [2024-07-14 18:37:37.511324] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.478 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:49.478 { 00:05:49.478 "subsystems": [ 00:05:49.478 { 00:05:49.478 "subsystem": "vfio_user_target", 00:05:49.478 "config": null 00:05:49.478 }, 00:05:49.478 { 00:05:49.478 "subsystem": "keyring", 00:05:49.478 "config": [] 00:05:49.478 }, 00:05:49.478 { 00:05:49.478 "subsystem": "iobuf", 00:05:49.478 "config": [ 00:05:49.478 { 00:05:49.478 "method": "iobuf_set_options", 00:05:49.478 "params": { 00:05:49.478 "small_pool_count": 8192, 00:05:49.478 "large_pool_count": 1024, 00:05:49.479 "small_bufsize": 8192, 00:05:49.479 "large_bufsize": 135168 00:05:49.479 } 00:05:49.479 } 00:05:49.479 ] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "sock", 00:05:49.479 "config": [ 00:05:49.479 { 00:05:49.479 "method": "sock_set_default_impl", 00:05:49.479 "params": { 00:05:49.479 "impl_name": "posix" 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "sock_impl_set_options", 00:05:49.479 "params": { 00:05:49.479 "impl_name": "ssl", 00:05:49.479 "recv_buf_size": 4096, 00:05:49.479 "send_buf_size": 4096, 00:05:49.479 "enable_recv_pipe": true, 00:05:49.479 "enable_quickack": false, 00:05:49.479 "enable_placement_id": 0, 00:05:49.479 "enable_zerocopy_send_server": true, 00:05:49.479 "enable_zerocopy_send_client": false, 00:05:49.479 "zerocopy_threshold": 0, 
00:05:49.479 "tls_version": 0, 00:05:49.479 "enable_ktls": false 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "sock_impl_set_options", 00:05:49.479 "params": { 00:05:49.479 "impl_name": "posix", 00:05:49.479 "recv_buf_size": 2097152, 00:05:49.479 "send_buf_size": 2097152, 00:05:49.479 "enable_recv_pipe": true, 00:05:49.479 "enable_quickack": false, 00:05:49.479 "enable_placement_id": 0, 00:05:49.479 "enable_zerocopy_send_server": true, 00:05:49.479 "enable_zerocopy_send_client": false, 00:05:49.479 "zerocopy_threshold": 0, 00:05:49.479 "tls_version": 0, 00:05:49.479 "enable_ktls": false 00:05:49.479 } 00:05:49.479 } 00:05:49.479 ] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "vmd", 00:05:49.479 "config": [] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "accel", 00:05:49.479 "config": [ 00:05:49.479 { 00:05:49.479 "method": "accel_set_options", 00:05:49.479 "params": { 00:05:49.479 "small_cache_size": 128, 00:05:49.479 "large_cache_size": 16, 00:05:49.479 "task_count": 2048, 00:05:49.479 "sequence_count": 2048, 00:05:49.479 "buf_count": 2048 00:05:49.479 } 00:05:49.479 } 00:05:49.479 ] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "bdev", 00:05:49.479 "config": [ 00:05:49.479 { 00:05:49.479 "method": "bdev_set_options", 00:05:49.479 "params": { 00:05:49.479 "bdev_io_pool_size": 65535, 00:05:49.479 "bdev_io_cache_size": 256, 00:05:49.479 "bdev_auto_examine": true, 00:05:49.479 "iobuf_small_cache_size": 128, 00:05:49.479 "iobuf_large_cache_size": 16 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "bdev_raid_set_options", 00:05:49.479 "params": { 00:05:49.479 "process_window_size_kb": 1024 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "bdev_iscsi_set_options", 00:05:49.479 "params": { 00:05:49.479 "timeout_sec": 30 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "bdev_nvme_set_options", 00:05:49.479 "params": { 00:05:49.479 "action_on_timeout": 
"none", 00:05:49.479 "timeout_us": 0, 00:05:49.479 "timeout_admin_us": 0, 00:05:49.479 "keep_alive_timeout_ms": 10000, 00:05:49.479 "arbitration_burst": 0, 00:05:49.479 "low_priority_weight": 0, 00:05:49.479 "medium_priority_weight": 0, 00:05:49.479 "high_priority_weight": 0, 00:05:49.479 "nvme_adminq_poll_period_us": 10000, 00:05:49.479 "nvme_ioq_poll_period_us": 0, 00:05:49.479 "io_queue_requests": 0, 00:05:49.479 "delay_cmd_submit": true, 00:05:49.479 "transport_retry_count": 4, 00:05:49.479 "bdev_retry_count": 3, 00:05:49.479 "transport_ack_timeout": 0, 00:05:49.479 "ctrlr_loss_timeout_sec": 0, 00:05:49.479 "reconnect_delay_sec": 0, 00:05:49.479 "fast_io_fail_timeout_sec": 0, 00:05:49.479 "disable_auto_failback": false, 00:05:49.479 "generate_uuids": false, 00:05:49.479 "transport_tos": 0, 00:05:49.479 "nvme_error_stat": false, 00:05:49.479 "rdma_srq_size": 0, 00:05:49.479 "io_path_stat": false, 00:05:49.479 "allow_accel_sequence": false, 00:05:49.479 "rdma_max_cq_size": 0, 00:05:49.479 "rdma_cm_event_timeout_ms": 0, 00:05:49.479 "dhchap_digests": [ 00:05:49.479 "sha256", 00:05:49.479 "sha384", 00:05:49.479 "sha512" 00:05:49.479 ], 00:05:49.479 "dhchap_dhgroups": [ 00:05:49.479 "null", 00:05:49.479 "ffdhe2048", 00:05:49.479 "ffdhe3072", 00:05:49.479 "ffdhe4096", 00:05:49.479 "ffdhe6144", 00:05:49.479 "ffdhe8192" 00:05:49.479 ] 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "bdev_nvme_set_hotplug", 00:05:49.479 "params": { 00:05:49.479 "period_us": 100000, 00:05:49.479 "enable": false 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "bdev_wait_for_examine" 00:05:49.479 } 00:05:49.479 ] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "scsi", 00:05:49.479 "config": null 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "scheduler", 00:05:49.479 "config": [ 00:05:49.479 { 00:05:49.479 "method": "framework_set_scheduler", 00:05:49.479 "params": { 00:05:49.479 "name": "static" 00:05:49.479 } 00:05:49.479 } 
00:05:49.479 ] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "vhost_scsi", 00:05:49.479 "config": [] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "vhost_blk", 00:05:49.479 "config": [] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "ublk", 00:05:49.479 "config": [] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "nbd", 00:05:49.479 "config": [] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "nvmf", 00:05:49.479 "config": [ 00:05:49.479 { 00:05:49.479 "method": "nvmf_set_config", 00:05:49.479 "params": { 00:05:49.479 "discovery_filter": "match_any", 00:05:49.479 "admin_cmd_passthru": { 00:05:49.479 "identify_ctrlr": false 00:05:49.479 } 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "nvmf_set_max_subsystems", 00:05:49.479 "params": { 00:05:49.479 "max_subsystems": 1024 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "nvmf_set_crdt", 00:05:49.479 "params": { 00:05:49.479 "crdt1": 0, 00:05:49.479 "crdt2": 0, 00:05:49.479 "crdt3": 0 00:05:49.479 } 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "method": "nvmf_create_transport", 00:05:49.479 "params": { 00:05:49.479 "trtype": "TCP", 00:05:49.479 "max_queue_depth": 128, 00:05:49.479 "max_io_qpairs_per_ctrlr": 127, 00:05:49.479 "in_capsule_data_size": 4096, 00:05:49.479 "max_io_size": 131072, 00:05:49.479 "io_unit_size": 131072, 00:05:49.479 "max_aq_depth": 128, 00:05:49.479 "num_shared_buffers": 511, 00:05:49.479 "buf_cache_size": 4294967295, 00:05:49.479 "dif_insert_or_strip": false, 00:05:49.479 "zcopy": false, 00:05:49.479 "c2h_success": true, 00:05:49.479 "sock_priority": 0, 00:05:49.479 "abort_timeout_sec": 1, 00:05:49.479 "ack_timeout": 0, 00:05:49.479 "data_wr_pool_size": 0 00:05:49.479 } 00:05:49.479 } 00:05:49.479 ] 00:05:49.479 }, 00:05:49.479 { 00:05:49.479 "subsystem": "iscsi", 00:05:49.479 "config": [ 00:05:49.479 { 00:05:49.479 "method": "iscsi_set_options", 00:05:49.479 "params": { 00:05:49.479 "node_base": 
"iqn.2016-06.io.spdk", 00:05:49.479 "max_sessions": 128, 00:05:49.479 "max_connections_per_session": 2, 00:05:49.479 "max_queue_depth": 64, 00:05:49.479 "default_time2wait": 2, 00:05:49.479 "default_time2retain": 20, 00:05:49.479 "first_burst_length": 8192, 00:05:49.479 "immediate_data": true, 00:05:49.479 "allow_duplicated_isid": false, 00:05:49.479 "error_recovery_level": 0, 00:05:49.479 "nop_timeout": 60, 00:05:49.479 "nop_in_interval": 30, 00:05:49.479 "disable_chap": false, 00:05:49.479 "require_chap": false, 00:05:49.479 "mutual_chap": false, 00:05:49.479 "chap_group": 0, 00:05:49.479 "max_large_datain_per_connection": 64, 00:05:49.479 "max_r2t_per_connection": 4, 00:05:49.479 "pdu_pool_size": 36864, 00:05:49.479 "immediate_data_pool_size": 16384, 00:05:49.479 "data_out_pool_size": 2048 00:05:49.479 } 00:05:49.479 } 00:05:49.479 ] 00:05:49.479 } 00:05:49.479 ] 00:05:49.479 } 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3464345 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3464345 ']' 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3464345 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3464345 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3464345' 00:05:49.479 
killing process with pid 3464345 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3464345 00:05:49.479 18:37:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3464345 00:05:50.111 18:37:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3464487 00:05:50.111 18:37:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.111 18:37:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3464487 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3464487 ']' 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3464487 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3464487 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3464487' 00:05:55.382 killing process with pid 3464487 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3464487 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3464487 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:55.382 00:05:55.382 real 0m6.469s 00:05:55.382 user 0m6.072s 00:05:55.382 sys 0m0.685s 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.382 ************************************ 00:05:55.382 END TEST skip_rpc_with_json 00:05:55.382 ************************************ 00:05:55.382 18:37:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.382 18:37:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:55.382 18:37:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.382 18:37:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.382 18:37:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.382 ************************************ 00:05:55.382 START TEST skip_rpc_with_delay 00:05:55.382 ************************************ 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:55.382 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.641 [2024-07-14 18:37:43.631811] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
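Editor's note: the skip_rpc_with_json section above saves the target's configuration with `rpc_cmd save_config`, then reboots `spdk_tgt --json config.json` and greps the log for "TCP Transport Init" — i.e. it depends on the saved config containing an `nvmf_create_transport` entry with `trtype: TCP`. The sketch below pulls that entry out of a heavily abridged copy of the dumped config; the Python reading is hypothetical (the actual test uses `cat` and `grep`).

```python
import json

# Heavily abridged version of the config.json written by "rpc_cmd save_config"
config = json.loads("""
{
  "subsystems": [
    {"subsystem": "sock", "config": [{"method": "sock_set_default_impl",
                                      "params": {"impl_name": "posix"}}]},
    {"subsystem": "nvmf", "config": [{"method": "nvmf_create_transport",
                                      "params": {"trtype": "TCP",
                                                 "max_queue_depth": 128}}]}
  ]
}
""")

# Locate the transport entry the restarted target will replay on boot
nvmf = next(s for s in config["subsystems"] if s["subsystem"] == "nvmf")
transport = next(c["params"] for c in nvmf["config"]
                 if c["method"] == "nvmf_create_transport")
print(transport["trtype"])  # TCP
```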
00:05:55.641 [2024-07-14 18:37:43.631948] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:55.641 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:55.641 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.641 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.641 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.641 00:05:55.641 real 0m0.068s 00:05:55.641 user 0m0.044s 00:05:55.641 sys 0m0.024s 00:05:55.641 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.641 18:37:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:55.641 ************************************ 00:05:55.641 END TEST skip_rpc_with_delay 00:05:55.641 ************************************ 00:05:55.641 18:37:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:55.641 18:37:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.641 18:37:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.641 18:37:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.641 18:37:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.641 18:37:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.641 18:37:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.641 ************************************ 00:05:55.641 START TEST exit_on_failed_rpc_init 00:05:55.641 ************************************ 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3465205 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3465205 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3465205 ']' 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.641 18:37:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.641 [2024-07-14 18:37:43.749605] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:55.641 [2024-07-14 18:37:43.749706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465205 ] 00:05:55.641 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.641 [2024-07-14 18:37:43.807639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.900 [2024-07-14 18:37:43.896190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.158 18:37:44 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.158 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:56.159 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:56.159 [2024-07-14 18:37:44.194951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:56.159 [2024-07-14 18:37:44.195050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465215 ] 00:05:56.159 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.159 [2024-07-14 18:37:44.258109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.159 [2024-07-14 18:37:44.350369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.159 [2024-07-14 18:37:44.350492] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:56.159 [2024-07-14 18:37:44.350513] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:56.159 [2024-07-14 18:37:44.350526] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.418 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3465205 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3465205 ']' 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3465205 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3465205 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3465205' 
00:05:56.419 killing process with pid 3465205 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3465205 00:05:56.419 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3465205 00:05:56.677 00:05:56.677 real 0m1.172s 00:05:56.677 user 0m1.306s 00:05:56.677 sys 0m0.423s 00:05:56.677 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.677 18:37:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.677 ************************************ 00:05:56.677 END TEST exit_on_failed_rpc_init 00:05:56.677 ************************************ 00:05:56.677 18:37:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.677 18:37:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.677 00:05:56.677 real 0m13.402s 00:05:56.677 user 0m12.646s 00:05:56.677 sys 0m1.619s 00:05:56.677 18:37:44 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.677 18:37:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.677 ************************************ 00:05:56.677 END TEST skip_rpc 00:05:56.677 ************************************ 00:05:56.937 18:37:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.937 18:37:44 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:56.937 18:37:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.937 18:37:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.937 18:37:44 -- common/autotest_common.sh@10 -- # set +x 00:05:56.937 ************************************ 00:05:56.937 START TEST rpc_client 00:05:56.937 ************************************ 00:05:56.937 18:37:44 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:56.937 * Looking for test storage... 00:05:56.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:56.937 18:37:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:56.937 OK 00:05:56.937 18:37:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:56.937 00:05:56.937 real 0m0.060s 00:05:56.937 user 0m0.021s 00:05:56.937 sys 0m0.043s 00:05:56.937 18:37:45 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.937 18:37:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:56.937 ************************************ 00:05:56.937 END TEST rpc_client 00:05:56.937 ************************************ 00:05:56.937 18:37:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.937 18:37:45 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:56.937 18:37:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.937 18:37:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.937 18:37:45 -- common/autotest_common.sh@10 -- # set +x 00:05:56.937 ************************************ 00:05:56.937 START TEST json_config 00:05:56.937 ************************************ 00:05:56.937 18:37:45 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.937 
18:37:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:56.937 18:37:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.937 18:37:45 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.937 18:37:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.937 18:37:45 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.937 18:37:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.937 18:37:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.937 18:37:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:56.937 18:37:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@47 -- # : 0 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:56.937 
18:37:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:56.937 18:37:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:56.937 INFO: JSON configuration test init 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:56.937 18:37:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.937 18:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.937 18:37:45 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:56.938 18:37:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.938 18:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.938 18:37:45 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:56.938 18:37:45 json_config -- json_config/common.sh@9 -- # local app=target 00:05:56.938 18:37:45 json_config -- json_config/common.sh@10 -- # shift 00:05:56.938 18:37:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.938 18:37:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.938 18:37:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.938 18:37:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.938 18:37:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:05:56.938 18:37:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3465459 00:05:56.938 18:37:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:56.938 18:37:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.938 Waiting for target to run... 00:05:56.938 18:37:45 json_config -- json_config/common.sh@25 -- # waitforlisten 3465459 /var/tmp/spdk_tgt.sock 00:05:56.938 18:37:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 3465459 ']' 00:05:56.938 18:37:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.938 18:37:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.938 18:37:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.938 18:37:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.938 18:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.938 [2024-07-14 18:37:45.154192] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:56.938 [2024-07-14 18:37:45.154275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465459 ] 00:05:57.195 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.453 [2024-07-14 18:37:45.646156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.711 [2024-07-14 18:37:45.724314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.968 18:37:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.968 18:37:46 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:57.968 18:37:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:57.968 00:05:57.968 18:37:46 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:57.968 18:37:46 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:57.968 18:37:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.968 18:37:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.968 18:37:46 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:57.968 18:37:46 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:57.968 18:37:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.968 18:37:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.968 18:37:46 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:57.968 18:37:46 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:57.968 18:37:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:01.259 
18:37:49 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:01.259 18:37:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:01.259 18:37:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.259 18:37:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.259 18:37:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:01.259 18:37:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:01.259 18:37:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:01.259 18:37:49 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:01.259 18:37:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:01.259 18:37:49 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:01.517 18:37:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.517 18:37:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:01.517 18:37:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.517 18:37:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:01.517 18:37:49 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.517 18:37:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.776 MallocForNvmf0 00:06:01.776 18:37:49 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:01.776 18:37:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:02.034 MallocForNvmf1 00:06:02.034 18:37:50 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.034 18:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.292 [2024-07-14 18:37:50.291138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.292 18:37:50 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.292 18:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:02.550 18:37:50 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:02.550 18:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:02.810 18:37:50 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:02.810 18:37:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:03.069 18:37:51 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.069 18:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:03.069 [2024-07-14 18:37:51.258390] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:03.069 18:37:51 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:03.069 18:37:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.069 18:37:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.329 18:37:51 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:06:03.329 18:37:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.329 18:37:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.329 18:37:51 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:03.329 18:37:51 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.329 18:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:03.587 MallocBdevForConfigChangeCheck 00:06:03.587 18:37:51 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:03.587 18:37:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.587 18:37:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.587 18:37:51 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:03.587 18:37:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.846 18:37:51 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:03.846 INFO: shutting down applications... 
00:06:03.846 18:37:51 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:03.846 18:37:51 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:03.846 18:37:51 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:03.846 18:37:51 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:05.754 Calling clear_iscsi_subsystem 00:06:05.754 Calling clear_nvmf_subsystem 00:06:05.754 Calling clear_nbd_subsystem 00:06:05.754 Calling clear_ublk_subsystem 00:06:05.754 Calling clear_vhost_blk_subsystem 00:06:05.754 Calling clear_vhost_scsi_subsystem 00:06:05.754 Calling clear_bdev_subsystem 00:06:05.755 18:37:53 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:05.755 18:37:53 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:05.755 18:37:53 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:05.755 18:37:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.755 18:37:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:05.755 18:37:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:06.015 18:37:53 json_config -- json_config/json_config.sh@345 -- # break 00:06:06.015 18:37:53 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:06.015 18:37:53 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:06.015 18:37:53 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:06.015 18:37:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.015 18:37:53 json_config -- json_config/common.sh@35 -- # [[ -n 3465459 ]] 00:06:06.015 18:37:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3465459 00:06:06.015 18:37:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.015 18:37:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.015 18:37:53 json_config -- json_config/common.sh@41 -- # kill -0 3465459 00:06:06.015 18:37:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.274 18:37:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.274 18:37:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.274 18:37:54 json_config -- json_config/common.sh@41 -- # kill -0 3465459 00:06:06.274 18:37:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.274 18:37:54 json_config -- json_config/common.sh@43 -- # break 00:06:06.274 18:37:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.274 18:37:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:06.274 SPDK target shutdown done 00:06:06.274 18:37:54 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:06.274 INFO: relaunching applications... 
00:06:06.274 18:37:54 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.274 18:37:54 json_config -- json_config/common.sh@9 -- # local app=target 00:06:06.275 18:37:54 json_config -- json_config/common.sh@10 -- # shift 00:06:06.275 18:37:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.275 18:37:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.275 18:37:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.275 18:37:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.275 18:37:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.275 18:37:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3466710 00:06:06.275 18:37:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.275 18:37:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.275 Waiting for target to run... 00:06:06.275 18:37:54 json_config -- json_config/common.sh@25 -- # waitforlisten 3466710 /var/tmp/spdk_tgt.sock 00:06:06.275 18:37:54 json_config -- common/autotest_common.sh@829 -- # '[' -z 3466710 ']' 00:06:06.275 18:37:54 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.275 18:37:54 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.275 18:37:54 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:06.275 18:37:54 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.275 18:37:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.533 [2024-07-14 18:37:54.544772] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:06.533 [2024-07-14 18:37:54.544860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466710 ] 00:06:06.533 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.101 [2024-07-14 18:37:55.071988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.101 [2024-07-14 18:37:55.154250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.379 [2024-07-14 18:37:58.184576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.379 [2024-07-14 18:37:58.217024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:10.942 18:37:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.942 18:37:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:10.942 18:37:58 json_config -- json_config/common.sh@26 -- # echo '' 00:06:10.942 00:06:10.942 18:37:58 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:10.942 18:37:58 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:10.942 INFO: Checking if target configuration is the same... 
00:06:10.942 18:37:58 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.942 18:37:58 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:10.942 18:37:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.942 + '[' 2 -ne 2 ']' 00:06:10.942 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:10.942 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:10.942 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:10.942 +++ basename /dev/fd/62 00:06:10.942 ++ mktemp /tmp/62.XXX 00:06:10.942 + tmp_file_1=/tmp/62.v3u 00:06:10.942 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.942 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.942 + tmp_file_2=/tmp/spdk_tgt_config.json.6cN 00:06:10.942 + ret=0 00:06:10.942 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.199 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:11.199 + diff -u /tmp/62.v3u /tmp/spdk_tgt_config.json.6cN 00:06:11.199 + echo 'INFO: JSON config files are the same' 00:06:11.199 INFO: JSON config files are the same 00:06:11.199 + rm /tmp/62.v3u /tmp/spdk_tgt_config.json.6cN 00:06:11.199 + exit 0 00:06:11.199 18:37:59 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:11.199 18:37:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:11.199 INFO: changing configuration and checking if this can be detected... 
00:06:11.199 18:37:59 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.199 18:37:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:11.455 18:37:59 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.455 18:37:59 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:11.455 18:37:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.455 + '[' 2 -ne 2 ']' 00:06:11.455 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:11.455 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:11.455 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:11.455 +++ basename /dev/fd/62 00:06:11.455 ++ mktemp /tmp/62.XXX 00:06:11.455 + tmp_file_1=/tmp/62.iiA 00:06:11.455 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:11.455 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:11.455 + tmp_file_2=/tmp/spdk_tgt_config.json.pjU 00:06:11.455 + ret=0 00:06:11.455 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.018 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.018 + diff -u /tmp/62.iiA /tmp/spdk_tgt_config.json.pjU 00:06:12.018 + ret=1 00:06:12.018 + echo '=== Start of file: /tmp/62.iiA ===' 00:06:12.018 + cat /tmp/62.iiA 00:06:12.018 + echo '=== End of file: /tmp/62.iiA ===' 00:06:12.018 + echo '' 00:06:12.018 + echo '=== Start of file: /tmp/spdk_tgt_config.json.pjU ===' 00:06:12.018 + cat /tmp/spdk_tgt_config.json.pjU 00:06:12.018 + echo '=== End of file: /tmp/spdk_tgt_config.json.pjU ===' 00:06:12.018 + echo '' 00:06:12.018 + rm /tmp/62.iiA /tmp/spdk_tgt_config.json.pjU 00:06:12.018 + exit 1 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:12.018 INFO: configuration change detected. 
00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@317 -- # [[ -n 3466710 ]] 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.018 18:38:00 json_config -- json_config/json_config.sh@323 -- # killprocess 3466710 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@948 -- # '[' -z 3466710 ']' 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@952 -- # kill -0 
3466710 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@953 -- # uname 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3466710 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3466710' 00:06:12.018 killing process with pid 3466710 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@967 -- # kill 3466710 00:06:12.018 18:38:00 json_config -- common/autotest_common.sh@972 -- # wait 3466710 00:06:13.955 18:38:01 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.955 18:38:01 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:13.955 18:38:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.956 18:38:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.956 18:38:01 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:13.956 18:38:01 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:13.956 INFO: Success 00:06:13.956 00:06:13.956 real 0m16.701s 00:06:13.956 user 0m18.495s 00:06:13.956 sys 0m2.202s 00:06:13.956 18:38:01 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.956 18:38:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.956 ************************************ 00:06:13.956 END TEST json_config 00:06:13.956 ************************************ 00:06:13.956 18:38:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.956 18:38:01 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:13.956 18:38:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.956 18:38:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.956 18:38:01 -- common/autotest_common.sh@10 -- # set +x 00:06:13.956 ************************************ 00:06:13.956 START TEST json_config_extra_key 00:06:13.956 ************************************ 00:06:13.956 18:38:01 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.956 18:38:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.956 18:38:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.956 18:38:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.956 18:38:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.956 18:38:01 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.956 18:38:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.956 18:38:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:13.956 18:38:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.956 18:38:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:13.956 INFO: launching applications... 
00:06:13.956 18:38:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3467694 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.956 Waiting for target to run... 
00:06:13.956 18:38:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3467694 /var/tmp/spdk_tgt.sock 00:06:13.956 18:38:01 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3467694 ']' 00:06:13.956 18:38:01 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.956 18:38:01 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.956 18:38:01 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:13.956 18:38:01 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.956 18:38:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.956 [2024-07-14 18:38:01.894500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:13.956 [2024-07-14 18:38:01.894586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467694 ] 00:06:13.956 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.215 [2024-07-14 18:38:02.238855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.215 [2024-07-14 18:38:02.302468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.781 18:38:02 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.781 18:38:02 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:14.781 18:38:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:14.781 00:06:14.781 18:38:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:14.781 INFO: shutting down applications... 
00:06:14.781 18:38:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:14.781 18:38:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:14.781 18:38:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:14.781 18:38:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3467694 ]] 00:06:14.781 18:38:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3467694 00:06:14.781 18:38:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:14.781 18:38:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.781 18:38:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3467694 00:06:14.781 18:38:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.347 18:38:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.347 18:38:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.347 18:38:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3467694 00:06:15.347 18:38:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.347 18:38:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:15.347 18:38:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.347 18:38:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.347 SPDK target shutdown done 00:06:15.347 18:38:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:15.347 Success 00:06:15.347 00:06:15.347 real 0m1.540s 00:06:15.347 user 0m1.511s 00:06:15.347 sys 0m0.417s 00:06:15.347 18:38:03 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.347 18:38:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.347 
************************************ 00:06:15.347 END TEST json_config_extra_key 00:06:15.347 ************************************ 00:06:15.347 18:38:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.347 18:38:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.347 18:38:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.347 18:38:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.347 18:38:03 -- common/autotest_common.sh@10 -- # set +x 00:06:15.347 ************************************ 00:06:15.347 START TEST alias_rpc 00:06:15.347 ************************************ 00:06:15.347 18:38:03 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.347 * Looking for test storage... 00:06:15.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:15.347 18:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.347 18:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3467877 00:06:15.347 18:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.347 18:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3467877 00:06:15.347 18:38:03 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3467877 ']' 00:06:15.348 18:38:03 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.348 18:38:03 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.348 18:38:03 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:15.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.348 18:38:03 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.348 18:38:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.348 [2024-07-14 18:38:03.481774] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:15.348 [2024-07-14 18:38:03.481864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467877 ] 00:06:15.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.348 [2024-07-14 18:38:03.543155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.605 [2024-07-14 18:38:03.628506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.862 18:38:03 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.862 18:38:03 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.862 18:38:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:16.119 18:38:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3467877 00:06:16.119 18:38:04 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3467877 ']' 00:06:16.119 18:38:04 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3467877 00:06:16.119 18:38:04 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:16.119 18:38:04 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.119 18:38:04 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3467877 00:06:16.119 18:38:04 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.119 18:38:04 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.119 
18:38:04 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3467877' 00:06:16.119 killing process with pid 3467877 00:06:16.119 18:38:04 alias_rpc -- common/autotest_common.sh@967 -- # kill 3467877 00:06:16.119 18:38:04 alias_rpc -- common/autotest_common.sh@972 -- # wait 3467877 00:06:16.686 00:06:16.686 real 0m1.226s 00:06:16.686 user 0m1.299s 00:06:16.686 sys 0m0.439s 00:06:16.686 18:38:04 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.686 18:38:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.686 ************************************ 00:06:16.686 END TEST alias_rpc 00:06:16.686 ************************************ 00:06:16.686 18:38:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.686 18:38:04 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:16.686 18:38:04 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:16.686 18:38:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.686 18:38:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.686 18:38:04 -- common/autotest_common.sh@10 -- # set +x 00:06:16.686 ************************************ 00:06:16.686 START TEST spdkcli_tcp 00:06:16.686 ************************************ 00:06:16.686 18:38:04 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:16.686 * Looking for test storage... 
00:06:16.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:16.686 18:38:04 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.686 18:38:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3468178 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:16.686 18:38:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3468178 00:06:16.686 18:38:04 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3468178 ']' 00:06:16.686 18:38:04 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.686 18:38:04 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.686 18:38:04 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:16.686 18:38:04 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.686 18:38:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.686 [2024-07-14 18:38:04.760225] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:16.686 [2024-07-14 18:38:04.760304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468178 ] 00:06:16.686 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.686 [2024-07-14 18:38:04.820125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.686 [2024-07-14 18:38:04.904806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.686 [2024-07-14 18:38:04.904810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.944 18:38:05 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.944 18:38:05 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:16.944 18:38:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3468197 00:06:16.944 18:38:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:16.944 18:38:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:17.203 [ 00:06:17.203 "bdev_malloc_delete", 00:06:17.203 "bdev_malloc_create", 00:06:17.203 "bdev_null_resize", 00:06:17.203 "bdev_null_delete", 00:06:17.203 "bdev_null_create", 00:06:17.203 "bdev_nvme_cuse_unregister", 00:06:17.203 "bdev_nvme_cuse_register", 00:06:17.203 "bdev_opal_new_user", 00:06:17.203 "bdev_opal_set_lock_state", 00:06:17.203 "bdev_opal_delete", 00:06:17.203 "bdev_opal_get_info", 00:06:17.203 "bdev_opal_create", 00:06:17.203 "bdev_nvme_opal_revert", 00:06:17.203 
"bdev_nvme_opal_init", 00:06:17.203 "bdev_nvme_send_cmd", 00:06:17.203 "bdev_nvme_get_path_iostat", 00:06:17.203 "bdev_nvme_get_mdns_discovery_info", 00:06:17.203 "bdev_nvme_stop_mdns_discovery", 00:06:17.203 "bdev_nvme_start_mdns_discovery", 00:06:17.203 "bdev_nvme_set_multipath_policy", 00:06:17.203 "bdev_nvme_set_preferred_path", 00:06:17.203 "bdev_nvme_get_io_paths", 00:06:17.203 "bdev_nvme_remove_error_injection", 00:06:17.203 "bdev_nvme_add_error_injection", 00:06:17.203 "bdev_nvme_get_discovery_info", 00:06:17.203 "bdev_nvme_stop_discovery", 00:06:17.203 "bdev_nvme_start_discovery", 00:06:17.203 "bdev_nvme_get_controller_health_info", 00:06:17.203 "bdev_nvme_disable_controller", 00:06:17.203 "bdev_nvme_enable_controller", 00:06:17.203 "bdev_nvme_reset_controller", 00:06:17.203 "bdev_nvme_get_transport_statistics", 00:06:17.203 "bdev_nvme_apply_firmware", 00:06:17.203 "bdev_nvme_detach_controller", 00:06:17.203 "bdev_nvme_get_controllers", 00:06:17.203 "bdev_nvme_attach_controller", 00:06:17.203 "bdev_nvme_set_hotplug", 00:06:17.203 "bdev_nvme_set_options", 00:06:17.203 "bdev_passthru_delete", 00:06:17.203 "bdev_passthru_create", 00:06:17.203 "bdev_lvol_set_parent_bdev", 00:06:17.203 "bdev_lvol_set_parent", 00:06:17.203 "bdev_lvol_check_shallow_copy", 00:06:17.203 "bdev_lvol_start_shallow_copy", 00:06:17.203 "bdev_lvol_grow_lvstore", 00:06:17.203 "bdev_lvol_get_lvols", 00:06:17.203 "bdev_lvol_get_lvstores", 00:06:17.203 "bdev_lvol_delete", 00:06:17.203 "bdev_lvol_set_read_only", 00:06:17.203 "bdev_lvol_resize", 00:06:17.203 "bdev_lvol_decouple_parent", 00:06:17.203 "bdev_lvol_inflate", 00:06:17.203 "bdev_lvol_rename", 00:06:17.203 "bdev_lvol_clone_bdev", 00:06:17.203 "bdev_lvol_clone", 00:06:17.203 "bdev_lvol_snapshot", 00:06:17.203 "bdev_lvol_create", 00:06:17.203 "bdev_lvol_delete_lvstore", 00:06:17.203 "bdev_lvol_rename_lvstore", 00:06:17.203 "bdev_lvol_create_lvstore", 00:06:17.203 "bdev_raid_set_options", 00:06:17.203 "bdev_raid_remove_base_bdev", 
00:06:17.203 "bdev_raid_add_base_bdev", 00:06:17.203 "bdev_raid_delete", 00:06:17.203 "bdev_raid_create", 00:06:17.203 "bdev_raid_get_bdevs", 00:06:17.203 "bdev_error_inject_error", 00:06:17.203 "bdev_error_delete", 00:06:17.203 "bdev_error_create", 00:06:17.203 "bdev_split_delete", 00:06:17.203 "bdev_split_create", 00:06:17.203 "bdev_delay_delete", 00:06:17.203 "bdev_delay_create", 00:06:17.203 "bdev_delay_update_latency", 00:06:17.203 "bdev_zone_block_delete", 00:06:17.203 "bdev_zone_block_create", 00:06:17.203 "blobfs_create", 00:06:17.203 "blobfs_detect", 00:06:17.203 "blobfs_set_cache_size", 00:06:17.203 "bdev_aio_delete", 00:06:17.203 "bdev_aio_rescan", 00:06:17.203 "bdev_aio_create", 00:06:17.203 "bdev_ftl_set_property", 00:06:17.203 "bdev_ftl_get_properties", 00:06:17.203 "bdev_ftl_get_stats", 00:06:17.203 "bdev_ftl_unmap", 00:06:17.203 "bdev_ftl_unload", 00:06:17.203 "bdev_ftl_delete", 00:06:17.203 "bdev_ftl_load", 00:06:17.203 "bdev_ftl_create", 00:06:17.203 "bdev_virtio_attach_controller", 00:06:17.203 "bdev_virtio_scsi_get_devices", 00:06:17.203 "bdev_virtio_detach_controller", 00:06:17.203 "bdev_virtio_blk_set_hotplug", 00:06:17.203 "bdev_iscsi_delete", 00:06:17.203 "bdev_iscsi_create", 00:06:17.203 "bdev_iscsi_set_options", 00:06:17.203 "accel_error_inject_error", 00:06:17.203 "ioat_scan_accel_module", 00:06:17.203 "dsa_scan_accel_module", 00:06:17.203 "iaa_scan_accel_module", 00:06:17.203 "vfu_virtio_create_scsi_endpoint", 00:06:17.203 "vfu_virtio_scsi_remove_target", 00:06:17.203 "vfu_virtio_scsi_add_target", 00:06:17.203 "vfu_virtio_create_blk_endpoint", 00:06:17.203 "vfu_virtio_delete_endpoint", 00:06:17.203 "keyring_file_remove_key", 00:06:17.203 "keyring_file_add_key", 00:06:17.203 "keyring_linux_set_options", 00:06:17.203 "iscsi_get_histogram", 00:06:17.203 "iscsi_enable_histogram", 00:06:17.203 "iscsi_set_options", 00:06:17.203 "iscsi_get_auth_groups", 00:06:17.203 "iscsi_auth_group_remove_secret", 00:06:17.203 "iscsi_auth_group_add_secret", 
00:06:17.203 "iscsi_delete_auth_group", 00:06:17.203 "iscsi_create_auth_group", 00:06:17.203 "iscsi_set_discovery_auth", 00:06:17.203 "iscsi_get_options", 00:06:17.203 "iscsi_target_node_request_logout", 00:06:17.203 "iscsi_target_node_set_redirect", 00:06:17.203 "iscsi_target_node_set_auth", 00:06:17.203 "iscsi_target_node_add_lun", 00:06:17.203 "iscsi_get_stats", 00:06:17.203 "iscsi_get_connections", 00:06:17.203 "iscsi_portal_group_set_auth", 00:06:17.203 "iscsi_start_portal_group", 00:06:17.203 "iscsi_delete_portal_group", 00:06:17.203 "iscsi_create_portal_group", 00:06:17.203 "iscsi_get_portal_groups", 00:06:17.203 "iscsi_delete_target_node", 00:06:17.203 "iscsi_target_node_remove_pg_ig_maps", 00:06:17.203 "iscsi_target_node_add_pg_ig_maps", 00:06:17.203 "iscsi_create_target_node", 00:06:17.203 "iscsi_get_target_nodes", 00:06:17.203 "iscsi_delete_initiator_group", 00:06:17.203 "iscsi_initiator_group_remove_initiators", 00:06:17.203 "iscsi_initiator_group_add_initiators", 00:06:17.203 "iscsi_create_initiator_group", 00:06:17.203 "iscsi_get_initiator_groups", 00:06:17.203 "nvmf_set_crdt", 00:06:17.203 "nvmf_set_config", 00:06:17.203 "nvmf_set_max_subsystems", 00:06:17.203 "nvmf_stop_mdns_prr", 00:06:17.203 "nvmf_publish_mdns_prr", 00:06:17.203 "nvmf_subsystem_get_listeners", 00:06:17.203 "nvmf_subsystem_get_qpairs", 00:06:17.203 "nvmf_subsystem_get_controllers", 00:06:17.203 "nvmf_get_stats", 00:06:17.203 "nvmf_get_transports", 00:06:17.203 "nvmf_create_transport", 00:06:17.203 "nvmf_get_targets", 00:06:17.203 "nvmf_delete_target", 00:06:17.203 "nvmf_create_target", 00:06:17.203 "nvmf_subsystem_allow_any_host", 00:06:17.203 "nvmf_subsystem_remove_host", 00:06:17.203 "nvmf_subsystem_add_host", 00:06:17.203 "nvmf_ns_remove_host", 00:06:17.203 "nvmf_ns_add_host", 00:06:17.203 "nvmf_subsystem_remove_ns", 00:06:17.203 "nvmf_subsystem_add_ns", 00:06:17.203 "nvmf_subsystem_listener_set_ana_state", 00:06:17.203 "nvmf_discovery_get_referrals", 00:06:17.203 
"nvmf_discovery_remove_referral", 00:06:17.203 "nvmf_discovery_add_referral", 00:06:17.203 "nvmf_subsystem_remove_listener", 00:06:17.203 "nvmf_subsystem_add_listener", 00:06:17.203 "nvmf_delete_subsystem", 00:06:17.203 "nvmf_create_subsystem", 00:06:17.203 "nvmf_get_subsystems", 00:06:17.203 "env_dpdk_get_mem_stats", 00:06:17.203 "nbd_get_disks", 00:06:17.203 "nbd_stop_disk", 00:06:17.203 "nbd_start_disk", 00:06:17.203 "ublk_recover_disk", 00:06:17.203 "ublk_get_disks", 00:06:17.203 "ublk_stop_disk", 00:06:17.203 "ublk_start_disk", 00:06:17.203 "ublk_destroy_target", 00:06:17.203 "ublk_create_target", 00:06:17.203 "virtio_blk_create_transport", 00:06:17.203 "virtio_blk_get_transports", 00:06:17.203 "vhost_controller_set_coalescing", 00:06:17.203 "vhost_get_controllers", 00:06:17.203 "vhost_delete_controller", 00:06:17.203 "vhost_create_blk_controller", 00:06:17.203 "vhost_scsi_controller_remove_target", 00:06:17.203 "vhost_scsi_controller_add_target", 00:06:17.203 "vhost_start_scsi_controller", 00:06:17.203 "vhost_create_scsi_controller", 00:06:17.203 "thread_set_cpumask", 00:06:17.203 "framework_get_governor", 00:06:17.203 "framework_get_scheduler", 00:06:17.203 "framework_set_scheduler", 00:06:17.203 "framework_get_reactors", 00:06:17.203 "thread_get_io_channels", 00:06:17.203 "thread_get_pollers", 00:06:17.203 "thread_get_stats", 00:06:17.203 "framework_monitor_context_switch", 00:06:17.203 "spdk_kill_instance", 00:06:17.203 "log_enable_timestamps", 00:06:17.203 "log_get_flags", 00:06:17.203 "log_clear_flag", 00:06:17.203 "log_set_flag", 00:06:17.203 "log_get_level", 00:06:17.203 "log_set_level", 00:06:17.203 "log_get_print_level", 00:06:17.203 "log_set_print_level", 00:06:17.203 "framework_enable_cpumask_locks", 00:06:17.203 "framework_disable_cpumask_locks", 00:06:17.203 "framework_wait_init", 00:06:17.203 "framework_start_init", 00:06:17.203 "scsi_get_devices", 00:06:17.203 "bdev_get_histogram", 00:06:17.204 "bdev_enable_histogram", 00:06:17.204 
"bdev_set_qos_limit", 00:06:17.204 "bdev_set_qd_sampling_period", 00:06:17.204 "bdev_get_bdevs", 00:06:17.204 "bdev_reset_iostat", 00:06:17.204 "bdev_get_iostat", 00:06:17.204 "bdev_examine", 00:06:17.204 "bdev_wait_for_examine", 00:06:17.204 "bdev_set_options", 00:06:17.204 "notify_get_notifications", 00:06:17.204 "notify_get_types", 00:06:17.204 "accel_get_stats", 00:06:17.204 "accel_set_options", 00:06:17.204 "accel_set_driver", 00:06:17.204 "accel_crypto_key_destroy", 00:06:17.204 "accel_crypto_keys_get", 00:06:17.204 "accel_crypto_key_create", 00:06:17.204 "accel_assign_opc", 00:06:17.204 "accel_get_module_info", 00:06:17.204 "accel_get_opc_assignments", 00:06:17.204 "vmd_rescan", 00:06:17.204 "vmd_remove_device", 00:06:17.204 "vmd_enable", 00:06:17.204 "sock_get_default_impl", 00:06:17.204 "sock_set_default_impl", 00:06:17.204 "sock_impl_set_options", 00:06:17.204 "sock_impl_get_options", 00:06:17.204 "iobuf_get_stats", 00:06:17.204 "iobuf_set_options", 00:06:17.204 "keyring_get_keys", 00:06:17.204 "framework_get_pci_devices", 00:06:17.204 "framework_get_config", 00:06:17.204 "framework_get_subsystems", 00:06:17.204 "vfu_tgt_set_base_path", 00:06:17.204 "trace_get_info", 00:06:17.204 "trace_get_tpoint_group_mask", 00:06:17.204 "trace_disable_tpoint_group", 00:06:17.204 "trace_enable_tpoint_group", 00:06:17.204 "trace_clear_tpoint_mask", 00:06:17.204 "trace_set_tpoint_mask", 00:06:17.204 "spdk_get_version", 00:06:17.204 "rpc_get_methods" 00:06:17.204 ] 00:06:17.204 18:38:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:17.204 18:38:05 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.204 18:38:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.204 18:38:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:17.204 18:38:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3468178 00:06:17.204 18:38:05 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3468178 ']' 
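The `rpc_get_methods` listing above is fetched through a socat proxy that forwards TCP port 9998 to the target's UNIX socket (`socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`), with `scripts/rpc.py -s 127.0.0.1 -p 9998` as the client. On the wire this is plain JSON-RPC 2.0; a minimal sketch of the request the client sends (the helper function here is illustrative, not SPDK's actual `rpc.py` internals):

```python
import json

def build_rpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request like the one scripts/rpc.py sends."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# The test above invokes rpc_get_methods over TCP 127.0.0.1:9998.
payload = build_rpc_request("rpc_get_methods")
```

The response to this request is the JSON array of method names captured in the log above.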
00:06:17.204 18:38:05 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3468178 00:06:17.204 18:38:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:17.462 18:38:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.462 18:38:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3468178 00:06:17.462 18:38:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.462 18:38:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.462 18:38:05 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3468178' 00:06:17.462 killing process with pid 3468178 00:06:17.462 18:38:05 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3468178 00:06:17.462 18:38:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3468178 00:06:17.720 00:06:17.720 real 0m1.199s 00:06:17.720 user 0m2.133s 00:06:17.720 sys 0m0.439s 00:06:17.720 18:38:05 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.720 18:38:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.720 ************************************ 00:06:17.720 END TEST spdkcli_tcp 00:06:17.720 ************************************ 00:06:17.720 18:38:05 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.720 18:38:05 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.720 18:38:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.720 18:38:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.720 18:38:05 -- common/autotest_common.sh@10 -- # set +x 00:06:17.720 ************************************ 00:06:17.720 START TEST dpdk_mem_utility 00:06:17.720 ************************************ 00:06:17.720 18:38:05 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.977 * Looking for test storage... 00:06:17.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:17.977 18:38:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:17.977 18:38:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3468388 00:06:17.977 18:38:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:17.977 18:38:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3468388 00:06:17.977 18:38:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3468388 ']' 00:06:17.977 18:38:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.977 18:38:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.977 18:38:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.978 18:38:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.978 18:38:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.978 [2024-07-14 18:38:05.999544] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:17.978 [2024-07-14 18:38:05.999617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468388 ] 00:06:17.978 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.978 [2024-07-14 18:38:06.060655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.978 [2024-07-14 18:38:06.150509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.236 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.236 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:18.236 18:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:18.236 18:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:18.236 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.236 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.236 { 00:06:18.236 "filename": "/tmp/spdk_mem_dump.txt" 00:06:18.236 } 00:06:18.236 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.236 18:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:18.494 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:18.494 1 heaps totaling size 814.000000 MiB 00:06:18.494 size: 814.000000 MiB heap id: 0 00:06:18.494 end heaps---------- 00:06:18.494 8 mempools totaling size 598.116089 MiB 00:06:18.494 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:18.494 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:18.494 size: 84.521057 MiB name: bdev_io_3468388 00:06:18.494 size: 51.011292 MiB name: evtpool_3468388 
00:06:18.494 size: 50.003479 MiB name: msgpool_3468388 00:06:18.494 size: 21.763794 MiB name: PDU_Pool 00:06:18.494 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:18.494 size: 0.026123 MiB name: Session_Pool 00:06:18.494 end mempools------- 00:06:18.494 6 memzones totaling size 4.142822 MiB 00:06:18.494 size: 1.000366 MiB name: RG_ring_0_3468388 00:06:18.494 size: 1.000366 MiB name: RG_ring_1_3468388 00:06:18.494 size: 1.000366 MiB name: RG_ring_4_3468388 00:06:18.494 size: 1.000366 MiB name: RG_ring_5_3468388 00:06:18.494 size: 0.125366 MiB name: RG_ring_2_3468388 00:06:18.494 size: 0.015991 MiB name: RG_ring_3_3468388 00:06:18.494 end memzones------- 00:06:18.494 18:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:18.494 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:18.494 list of free elements. size: 12.519348 MiB 00:06:18.494 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:18.494 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:18.494 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:18.494 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:18.494 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:18.494 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:18.494 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:18.494 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:18.494 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:18.494 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:18.494 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:18.494 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:18.494 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:18.494 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:06:18.494 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:18.494 list of standard malloc elements. size: 199.218079 MiB 00:06:18.494 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:18.494 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:18.494 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:18.494 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:18.494 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:18.494 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:18.495 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:18.495 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:18.495 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:18.495 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:18.495 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:18.495 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:06:18.495 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:18.495 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:18.495 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:18.495 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:18.495 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:18.495 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:18.495 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:18.495 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:18.495 list of memzone associated elements. 
size: 602.262573 MiB 00:06:18.495 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:18.495 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:18.495 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:18.495 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:18.495 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:18.495 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3468388_0 00:06:18.495 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:18.495 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3468388_0 00:06:18.495 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:18.495 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3468388_0 00:06:18.495 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:18.495 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:18.495 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:18.495 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:18.495 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:18.495 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3468388 00:06:18.495 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:18.495 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3468388 00:06:18.495 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:18.495 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3468388 00:06:18.495 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:18.495 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:18.495 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:18.495 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:18.495 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:18.495 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:18.495 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:18.495 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:18.495 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:18.495 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3468388 00:06:18.495 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:18.495 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3468388 00:06:18.495 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:18.495 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3468388 00:06:18.495 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:18.495 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3468388 00:06:18.495 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:18.495 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3468388 00:06:18.495 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:18.495 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:18.495 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:18.495 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:18.495 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:18.495 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:18.495 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:18.495 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3468388 00:06:18.495 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:18.495 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:18.495 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:18.495 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:18.495 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:06:18.495 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3468388 00:06:18.495 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:18.495 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:18.495 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:18.495 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3468388 00:06:18.495 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:18.495 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3468388 00:06:18.495 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:18.495 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:18.495 18:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:18.495 18:38:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3468388 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3468388 ']' 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3468388 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3468388 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3468388' 00:06:18.495 killing process with pid 3468388 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3468388 00:06:18.495 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3468388 00:06:18.754 00:06:18.754 real 0m1.049s 
00:06:18.754 user 0m1.016s 00:06:18.754 sys 0m0.395s 00:06:18.754 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.754 18:38:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.754 ************************************ 00:06:18.754 END TEST dpdk_mem_utility 00:06:18.754 ************************************ 00:06:18.754 18:38:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.754 18:38:06 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:18.754 18:38:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.754 18:38:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.754 18:38:06 -- common/autotest_common.sh@10 -- # set +x 00:06:19.012 ************************************ 00:06:19.012 START TEST event 00:06:19.012 ************************************ 00:06:19.012 18:38:06 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:19.012 * Looking for test storage... 
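The `dpdk_mem_info.py` output above summarizes the target's DPDK heap, with mempool and memzone entries printed as `size: <MiB> name: <pool>` lines. A hedged sketch of tallying such a dump (a hypothetical helper, not part of the actual script; sample lines taken from the log above):

```python
import re

# Matches dump lines like "size: 212.674988 MiB name: PDU_immediate_data_Pool"
SIZE_RE = re.compile(r"size:\s+([\d.]+)\s+MiB\s+name:\s+(\S+)")

def total_mempool_mib(lines):
    """Sum the sizes of 'size: X MiB name: Y' entries from a mem-stats dump."""
    total = 0.0
    for line in lines:
        m = SIZE_RE.search(line)
        if m:
            total += float(m.group(1))
    return total

dump = [
    "size: 212.674988 MiB name: PDU_immediate_data_Pool",
    "size: 158.602051 MiB name: PDU_data_out_Pool",
]
# total_mempool_mib(dump) -> 371.277039
```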
00:06:19.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:19.012 18:38:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:19.012 18:38:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:19.012 18:38:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:19.012 18:38:07 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:19.012 18:38:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.012 18:38:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.012 ************************************ 00:06:19.012 START TEST event_perf 00:06:19.012 ************************************ 00:06:19.012 18:38:07 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:19.012 Running I/O for 1 seconds...[2024-07-14 18:38:07.080928] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:19.012 [2024-07-14 18:38:07.080987] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468578 ] 00:06:19.012 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.012 [2024-07-14 18:38:07.137706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.012 [2024-07-14 18:38:07.228585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.012 [2024-07-14 18:38:07.228644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.012 [2024-07-14 18:38:07.228709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.012 [2024-07-14 18:38:07.228712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.390 Running I/O for 1 seconds... 00:06:20.390 lcore 0: 230344 00:06:20.390 lcore 1: 230344 00:06:20.390 lcore 2: 230344 00:06:20.390 lcore 3: 230344 00:06:20.390 done. 
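`event_perf` above was launched with `-m 0xF`, and the EAL reports four reactors on cores 0-3: the mask is simply a bitmap of lcore ids. A small illustrative decoder (not SPDK code):

```python
def cores_from_mask(mask):
    """Expand an SPDK/DPDK core mask (e.g. 0xF) into the list of lcore ids."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

# 0xF selects lcores 0-3, matching the four reactors started above.
# cores_from_mask(0xF) -> [0, 1, 2, 3]
```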
00:06:20.390 00:06:20.390 real 0m1.243s 00:06:20.390 user 0m4.156s 00:06:20.390 sys 0m0.084s 00:06:20.390 18:38:08 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.390 18:38:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.390 ************************************ 00:06:20.390 END TEST event_perf 00:06:20.390 ************************************ 00:06:20.390 18:38:08 event -- common/autotest_common.sh@1142 -- # return 0 00:06:20.390 18:38:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:20.390 18:38:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.390 18:38:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.390 18:38:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.390 ************************************ 00:06:20.390 START TEST event_reactor 00:06:20.390 ************************************ 00:06:20.390 18:38:08 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:20.390 [2024-07-14 18:38:08.378952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:20.390 [2024-07-14 18:38:08.379017] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468734 ] 00:06:20.390 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.390 [2024-07-14 18:38:08.445210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.390 [2024-07-14 18:38:08.535978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.761 test_start 00:06:21.761 oneshot 00:06:21.761 tick 100 00:06:21.761 tick 100 00:06:21.761 tick 250 00:06:21.761 tick 100 00:06:21.761 tick 100 00:06:21.761 tick 100 00:06:21.761 tick 250 00:06:21.761 tick 500 00:06:21.761 tick 100 00:06:21.761 tick 100 00:06:21.761 tick 250 00:06:21.761 tick 100 00:06:21.761 tick 100 00:06:21.761 test_end 00:06:21.761 00:06:21.761 real 0m1.255s 00:06:21.761 user 0m1.171s 00:06:21.761 sys 0m0.079s 00:06:21.761 18:38:09 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.761 18:38:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:21.761 ************************************ 00:06:21.761 END TEST event_reactor 00:06:21.761 ************************************ 00:06:21.761 18:38:09 event -- common/autotest_common.sh@1142 -- # return 0 00:06:21.761 18:38:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.761 18:38:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:21.761 18:38:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.761 18:38:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.761 ************************************ 00:06:21.761 START TEST event_reactor_perf 00:06:21.761 ************************************ 00:06:21.761 18:38:09 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:21.761 [2024-07-14 18:38:09.681862] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:21.761 [2024-07-14 18:38:09.681956] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468888 ] 00:06:21.761 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.761 [2024-07-14 18:38:09.747587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.761 [2024-07-14 18:38:09.839308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.694 test_start 00:06:22.694 test_end 00:06:22.694 Performance: 356280 events per second 00:06:22.694 00:06:22.694 real 0m1.248s 00:06:22.694 user 0m1.161s 00:06:22.694 sys 0m0.083s 00:06:22.694 18:38:10 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.694 18:38:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.694 ************************************ 00:06:22.694 END TEST event_reactor_perf 00:06:22.694 ************************************ 00:06:22.953 18:38:10 event -- common/autotest_common.sh@1142 -- # return 0 00:06:22.953 18:38:10 event -- event/event.sh@49 -- # uname -s 00:06:22.953 18:38:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:22.953 18:38:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:22.953 18:38:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.953 18:38:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.953 18:38:10 event -- common/autotest_common.sh@10 -- # set +x 
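`reactor_perf` reports its result on a single `Performance: N events per second` line, as seen above. A hedged sketch of pulling the throughput figure out of that line (an illustrative parser, not part of the test harness):

```python
import re

# Matches the reactor_perf result line, e.g. "Performance: 356280 events per second"
PERF_RE = re.compile(r"Performance:\s+(\d+)\s+events per second")

def parse_events_per_second(line):
    """Extract the throughput figure from a reactor_perf result line."""
    m = PERF_RE.search(line)
    return int(m.group(1)) if m else None

# parse_events_per_second("Performance: 356280 events per second") -> 356280
```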
00:06:22.953 ************************************ 00:06:22.953 START TEST event_scheduler 00:06:22.953 ************************************ 00:06:22.953 18:38:10 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:22.953 * Looking for test storage... 00:06:22.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:22.953 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:22.953 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3469077 00:06:22.953 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:22.953 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.953 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3469077 00:06:22.953 18:38:11 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3469077 ']' 00:06:22.953 18:38:11 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.953 18:38:11 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.953 18:38:11 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.953 18:38:11 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.953 18:38:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.953 [2024-07-14 18:38:11.059904] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:22.953 [2024-07-14 18:38:11.059993] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469077 ] 00:06:22.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.953 [2024-07-14 18:38:11.119204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.211 [2024-07-14 18:38:11.206948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.211 [2024-07-14 18:38:11.207005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.211 [2024-07-14 18:38:11.207072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.211 [2024-07-14 18:38:11.207075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.211 18:38:11 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.211 18:38:11 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:23.212 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:23.212 18:38:11 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.212 18:38:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.212 [2024-07-14 18:38:11.267874] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:23.212 [2024-07-14 18:38:11.267935] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:23.212 [2024-07-14 18:38:11.267952] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:23.212 [2024-07-14 18:38:11.267963] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:23.212 [2024-07-14 18:38:11.267974] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:06:23.212 18:38:11 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.212 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:23.212 18:38:11 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.212 18:38:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.212 [2024-07-14 18:38:11.362485] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:23.212 18:38:11 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.212 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:23.212 18:38:11 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.212 18:38:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.212 18:38:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.212 ************************************ 00:06:23.212 START TEST scheduler_create_thread 00:06:23.212 ************************************ 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.212 2 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.212 3 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.212 4 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.212 5 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.212 6 
00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.212 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.470 7 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.470 8 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.470 9 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:23.470 18:38:11 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.470 10 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.470 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.035 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.035 00:06:24.035 real 0m0.591s 00:06:24.035 user 0m0.007s 00:06:24.035 sys 0m0.007s 00:06:24.035 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.035 18:38:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.035 ************************************ 00:06:24.035 END TEST scheduler_create_thread 00:06:24.035 ************************************ 00:06:24.035 18:38:11 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:24.035 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:24.035 18:38:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3469077 00:06:24.035 18:38:11 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3469077 ']' 00:06:24.035 18:38:11 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3469077 00:06:24.035 18:38:12 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:24.035 18:38:12 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.035 18:38:12 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3469077 00:06:24.035 18:38:12 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:24.035 18:38:12 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:24.036 18:38:12 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3469077' 00:06:24.036 killing process with pid 3469077 00:06:24.036 18:38:12 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3469077 00:06:24.036 18:38:12 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3469077 00:06:24.293 [2024-07-14 18:38:12.462540] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:24.551 00:06:24.551 real 0m1.707s 00:06:24.551 user 0m2.190s 00:06:24.551 sys 0m0.313s 00:06:24.551 18:38:12 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.551 18:38:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.552 ************************************ 00:06:24.552 END TEST event_scheduler 00:06:24.552 ************************************ 00:06:24.552 18:38:12 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.552 18:38:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:24.552 18:38:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:24.552 18:38:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.552 18:38:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.552 18:38:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.552 ************************************ 00:06:24.552 START TEST app_repeat 00:06:24.552 ************************************ 00:06:24.552 18:38:12 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.552 18:38:12 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3469382 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3469382' 00:06:24.552 Process app_repeat pid: 3469382 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:24.552 spdk_app_start Round 0 00:06:24.552 18:38:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3469382 /var/tmp/spdk-nbd.sock 00:06:24.552 18:38:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3469382 ']' 00:06:24.552 18:38:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.552 18:38:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.552 18:38:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:24.552 18:38:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.552 18:38:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.552 [2024-07-14 18:38:12.753473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:24.552 [2024-07-14 18:38:12.753532] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469382 ] 00:06:24.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.810 [2024-07-14 18:38:12.816494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.810 [2024-07-14 18:38:12.908305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.810 [2024-07-14 18:38:12.908310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.810 18:38:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.810 18:38:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:24.810 18:38:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.068 Malloc0 00:06:25.068 18:38:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.326 Malloc1 00:06:25.326 18:38:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local 
bdev_list 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.326 18:38:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.582 /dev/nbd0 00:06:25.839 18:38:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.839 18:38:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@871 
-- # break 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.839 1+0 records in 00:06:25.839 1+0 records out 00:06:25.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144017 s, 28.4 MB/s 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.839 18:38:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:25.839 18:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.839 18:38:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.839 18:38:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.095 /dev/nbd1 00:06:26.095 18:38:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.095 18:38:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.096 1+0 records in 00:06:26.096 1+0 records out 00:06:26.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431417 s, 9.5 MB/s 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.096 18:38:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:26.096 18:38:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.096 18:38:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.096 18:38:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.096 18:38:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.096 18:38:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.353 { 00:06:26.353 "nbd_device": "/dev/nbd0", 00:06:26.353 
"bdev_name": "Malloc0" 00:06:26.353 }, 00:06:26.353 { 00:06:26.353 "nbd_device": "/dev/nbd1", 00:06:26.353 "bdev_name": "Malloc1" 00:06:26.353 } 00:06:26.353 ]' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.353 { 00:06:26.353 "nbd_device": "/dev/nbd0", 00:06:26.353 "bdev_name": "Malloc0" 00:06:26.353 }, 00:06:26.353 { 00:06:26.353 "nbd_device": "/dev/nbd1", 00:06:26.353 "bdev_name": "Malloc1" 00:06:26.353 } 00:06:26.353 ]' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.353 /dev/nbd1' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.353 /dev/nbd1' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
bs=4096 count=256 00:06:26.353 256+0 records in 00:06:26.353 256+0 records out 00:06:26.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050427 s, 208 MB/s 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.353 256+0 records in 00:06:26.353 256+0 records out 00:06:26.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238388 s, 44.0 MB/s 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.353 256+0 records in 00:06:26.353 256+0 records out 00:06:26.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222602 s, 47.1 MB/s 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
/dev/nbd0 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.353 18:38:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.610 18:38:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.610 18:38:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.610 18:38:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.610 18:38:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.610 18:38:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.610 18:38:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.610 18:38:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.610 18:38:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.610 18:38:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.610 
18:38:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.866 18:38:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.125 
18:38:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.125 18:38:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.125 18:38:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.384 18:38:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.642 [2024-07-14 18:38:15.816342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.902 [2024-07-14 18:38:15.910027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.902 [2024-07-14 18:38:15.910027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.902 [2024-07-14 18:38:15.970059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.902 [2024-07-14 18:38:15.970113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.438 18:38:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.438 18:38:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:30.438 spdk_app_start Round 1 00:06:30.438 18:38:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3469382 /var/tmp/spdk-nbd.sock 00:06:30.438 18:38:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3469382 ']' 00:06:30.438 18:38:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.438 18:38:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.438 18:38:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:30.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.438 18:38:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.438 18:38:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.696 18:38:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.696 18:38:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:30.696 18:38:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.953 Malloc0 00:06:30.953 18:38:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.226 Malloc1 00:06:31.226 18:38:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.226 18:38:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.528 /dev/nbd0 00:06:31.528 18:38:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.528 18:38:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.529 1+0 records in 00:06:31.529 1+0 records out 00:06:31.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228899 s, 17.9 MB/s 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.529 18:38:19 
event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:31.529 18:38:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:31.529 18:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.529 18:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.529 18:38:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.787 /dev/nbd1 00:06:31.787 18:38:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.787 18:38:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.787 1+0 records in 00:06:31.787 1+0 records out 00:06:31.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206695 s, 
19.8 MB/s 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.787 18:38:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:31.788 18:38:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:31.788 18:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.788 18:38:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.788 18:38:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.788 18:38:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.788 18:38:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.046 { 00:06:32.046 "nbd_device": "/dev/nbd0", 00:06:32.046 "bdev_name": "Malloc0" 00:06:32.046 }, 00:06:32.046 { 00:06:32.046 "nbd_device": "/dev/nbd1", 00:06:32.046 "bdev_name": "Malloc1" 00:06:32.046 } 00:06:32.046 ]' 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.046 { 00:06:32.046 "nbd_device": "/dev/nbd0", 00:06:32.046 "bdev_name": "Malloc0" 00:06:32.046 }, 00:06:32.046 { 00:06:32.046 "nbd_device": "/dev/nbd1", 00:06:32.046 "bdev_name": "Malloc1" 00:06:32.046 } 00:06:32.046 ]' 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.046 /dev/nbd1' 00:06:32.046 18:38:20 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.046 /dev/nbd1' 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.046 256+0 records in 00:06:32.046 256+0 records out 00:06:32.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503528 s, 208 MB/s 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.046 256+0 records in 00:06:32.046 256+0 records out 00:06:32.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202932 s, 51.7 MB/s 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.046 18:38:20 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.046 256+0 records in 00:06:32.046 256+0 records out 00:06:32.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024169 s, 43.4 MB/s 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.046 18:38:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.304 18:38:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.304 18:38:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.304 18:38:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.304 18:38:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.304 18:38:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.304 18:38:20 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.304 18:38:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.304 18:38:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.304 18:38:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.305 18:38:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.565 18:38:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.823 18:38:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.081 18:38:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.081 18:38:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.338 18:38:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.596 [2024-07-14 18:38:21.618064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.596 [2024-07-14 18:38:21.707412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 
00:06:33.596 [2024-07-14 18:38:21.707416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.596 [2024-07-14 18:38:21.769500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.596 [2024-07-14 18:38:21.769581] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.885 18:38:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.885 18:38:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:36.885 spdk_app_start Round 2 00:06:36.885 18:38:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3469382 /var/tmp/spdk-nbd.sock 00:06:36.885 18:38:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3469382 ']' 00:06:36.885 18:38:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.885 18:38:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.885 18:38:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:36.885 18:38:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.885 18:38:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.885 18:38:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.885 18:38:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:36.885 18:38:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.885 Malloc0 00:06:36.885 18:38:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.143 Malloc1 00:06:37.143 18:38:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.143 18:38:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.402 /dev/nbd0 00:06:37.402 18:38:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.402 18:38:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.402 1+0 records in 00:06:37.402 1+0 records out 00:06:37.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231767 s, 17.7 MB/s 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.402 18:38:25 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.402 18:38:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.402 18:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.402 18:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.402 18:38:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.659 /dev/nbd1 00:06:37.659 18:38:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.659 18:38:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.659 1+0 records in 00:06:37.659 1+0 records out 00:06:37.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187472 s, 21.8 MB/s 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.659 18:38:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.659 18:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.659 18:38:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.659 18:38:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.659 18:38:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.659 18:38:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.917 18:38:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.917 { 00:06:37.917 "nbd_device": "/dev/nbd0", 00:06:37.917 "bdev_name": "Malloc0" 00:06:37.917 }, 00:06:37.917 { 00:06:37.917 "nbd_device": "/dev/nbd1", 00:06:37.917 "bdev_name": "Malloc1" 00:06:37.917 } 00:06:37.917 ]' 00:06:37.917 18:38:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.917 { 00:06:37.917 "nbd_device": "/dev/nbd0", 00:06:37.917 "bdev_name": "Malloc0" 00:06:37.917 }, 00:06:37.917 { 00:06:37.917 "nbd_device": "/dev/nbd1", 00:06:37.917 "bdev_name": "Malloc1" 00:06:37.917 } 00:06:37.917 ]' 00:06:37.917 18:38:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.917 /dev/nbd1' 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.917 /dev/nbd1' 00:06:37.917 
18:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.917 256+0 records in 00:06:37.917 256+0 records out 00:06:37.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448231 s, 234 MB/s 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.917 256+0 records in 00:06:37.917 256+0 records out 00:06:37.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259597 s, 40.4 MB/s 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.917 256+0 records in 00:06:37.917 256+0 records out 00:06:37.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225665 s, 46.5 MB/s 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.917 18:38:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.174 18:38:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.432 18:38:26 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break
00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:38.432 18:38:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:38.690 18:38:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:38.690 18:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:38.690 18:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:38.690 18:38:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:38.690 18:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:38.948 18:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:38.948 18:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:38.948 18:38:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:38.948 18:38:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:38.948 18:38:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:38.948 18:38:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:38.948 18:38:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:38.948 18:38:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:39.207 18:38:27 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:39.207 [2024-07-14 18:38:27.429645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:39.466 [2024-07-14 18:38:27.519501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:39.466 [2024-07-14 18:38:27.519506]
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.466 [2024-07-14 18:38:27.580897] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:39.466 [2024-07-14 18:38:27.580984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:42.001 18:38:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3469382 /var/tmp/spdk-nbd.sock
00:06:42.001 18:38:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3469382 ']'
00:06:42.001 18:38:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:42.001 18:38:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:42.001 18:38:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:42.001 18:38:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:42.001 18:38:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:42.259 18:38:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:42.259 18:38:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:06:42.259 18:38:30 event.app_repeat -- event/event.sh@39 -- # killprocess 3469382
00:06:42.259 18:38:30 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3469382 ']'
00:06:42.259 18:38:30 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3469382
00:06:42.259 18:38:30 event.app_repeat -- common/autotest_common.sh@953 -- # uname
00:06:42.259 18:38:30 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:42.260 18:38:30 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3469382
00:06:42.518 18:38:30 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:42.518 18:38:30 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:42.518 18:38:30 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3469382'
killing process with pid 3469382
00:06:42.518 18:38:30 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3469382
00:06:42.518 18:38:30 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3469382
00:06:42.518 spdk_app_start is called in Round 0.
00:06:42.519 Shutdown signal received, stop current app iteration
00:06:42.519 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization...
00:06:42.519 spdk_app_start is called in Round 1.
00:06:42.519 Shutdown signal received, stop current app iteration
00:06:42.519 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization...
00:06:42.519 spdk_app_start is called in Round 2.
00:06:42.519 Shutdown signal received, stop current app iteration
00:06:42.519 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization...
00:06:42.519 spdk_app_start is called in Round 3.
00:06:42.519 Shutdown signal received, stop current app iteration
00:06:42.519 18:38:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:42.519 18:38:30 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:42.519
00:06:42.519 real 0m17.968s
00:06:42.519 user 0m39.130s
00:06:42.519 sys 0m3.257s
00:06:42.519 18:38:30 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:42.519 18:38:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:42.519 ************************************
00:06:42.519 END TEST app_repeat
00:06:42.519 ************************************
00:06:42.519 18:38:30 event -- common/autotest_common.sh@1142 -- # return 0
00:06:42.519 18:38:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:42.519 18:38:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:42.519 18:38:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:42.519 18:38:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:42.519 18:38:30 event -- common/autotest_common.sh@10 -- # set +x
00:06:42.777 ************************************
00:06:42.777 START TEST cpu_locks
00:06:42.777 ************************************
00:06:42.777 18:38:30 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:42.777 * Looking for test storage...
00:06:42.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:42.777 18:38:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:42.777 18:38:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:42.777 18:38:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:42.777 18:38:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:42.777 18:38:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:42.777 18:38:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:42.777 18:38:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:42.777 ************************************
00:06:42.777 START TEST default_locks
00:06:42.777 ************************************
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3471733
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3471733
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3471733 ']'
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:42.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:42.777 18:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:42.777 [2024-07-14 18:38:30.879799] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:42.777 [2024-07-14 18:38:30.879911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471733 ]
00:06:42.777 EAL: No free 2048 kB hugepages reported on node 1
00:06:42.777 [2024-07-14 18:38:30.937733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.037 [2024-07-14 18:38:31.021927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.295 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:43.295 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:06:43.295 18:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3471733
00:06:43.295 18:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3471733
00:06:43.295 18:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:43.552 lslocks: write error
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3471733
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3471733 ']'
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3471733
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:06:43.552 18:38:31 event.cpu_locks.default_locks
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3471733
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3471733'
killing process with pid 3471733
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3471733
00:06:43.552 18:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3471733
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3471733
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3471733
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3471733
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3471733 ']'
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:44.121 18:38:32 event.cpu_locks.default_locks --
common/autotest_common.sh@834 -- # local max_retries=100
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:44.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3471733) - No such process
00:06:44.121 ERROR: process (pid: 3471733) is no longer running
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:44.121
00:06:44.121 real 0m1.223s
00:06:44.121 user 0m1.129s
00:06:44.121 sys 0m0.564s
00:06:44.121 18:38:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:44.121 18:38:32 event.cpu_locks.default_locks --
common/autotest_common.sh@10 -- # set +x
00:06:44.121 ************************************
00:06:44.121 END TEST default_locks
00:06:44.121 ************************************
00:06:44.121 18:38:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:44.121 18:38:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:44.121 18:38:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:44.121 18:38:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:44.121 18:38:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:44.121 ************************************
00:06:44.121 START TEST default_locks_via_rpc
00:06:44.121 ************************************
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3471901
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3471901
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3471901 ']'
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:44.121 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:44.121 [2024-07-14 18:38:32.152487] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:44.121 [2024-07-14 18:38:32.152576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471901 ]
00:06:44.121 EAL: No free 2048 kB hugepages reported on node 1
00:06:44.121 [2024-07-14 18:38:32.210812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:44.121 [2024-07-14 18:38:32.298829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3471901
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3471901
00:06:44.380 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3471901
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3471901 ']'
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3471901
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3471901
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3471901'
killing process with pid 3471901
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc --
common/autotest_common.sh@967 -- # kill 3471901
00:06:44.638 18:38:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3471901
00:06:45.206
00:06:45.206 real 0m1.148s
00:06:45.206 user 0m1.071s
00:06:45.206 sys 0m0.539s
00:06:45.206 18:38:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:45.206 18:38:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:45.206 ************************************
00:06:45.206 END TEST default_locks_via_rpc
00:06:45.206 ************************************
00:06:45.206 18:38:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:45.206 18:38:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:45.206 18:38:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:45.206 18:38:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:45.206 18:38:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:45.206 ************************************
00:06:45.206 START TEST non_locking_app_on_locked_coremask
00:06:45.206 ************************************
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3472063
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3472063 /var/tmp/spdk.sock
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3472063 ']'
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:45.206 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:45.206 [2024-07-14 18:38:33.344682] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:45.206 [2024-07-14 18:38:33.344768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472063 ]
00:06:45.206 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.206 [2024-07-14 18:38:33.402418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.465 [2024-07-14 18:38:33.490970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3472066
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3472066 /var/tmp/spdk2.sock
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3472066 ']'
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:45.723 18:38:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:45.723 [2024-07-14 18:38:33.792749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:45.723 [2024-07-14 18:38:33.792822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472066 ]
00:06:45.723 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.723 [2024-07-14 18:38:33.884231] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:45.723 [2024-07-14 18:38:33.884264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.982 [2024-07-14 18:38:34.073632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.550 18:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:46.550 18:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:06:46.550 18:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3472063
00:06:46.550 18:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3472063
00:06:46.550 18:38:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:47.117 lslocks: write error
00:06:47.117 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3472063
00:06:47.117 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3472063 ']'
00:06:47.117 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3472063
00:06:47.118 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:47.118 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:47.118 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3472063
00:06:47.118 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:47.118 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:47.118 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask --
common/autotest_common.sh@966 -- # echo 'killing process with pid 3472063'
killing process with pid 3472063
00:06:47.118 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3472063
00:06:47.118 18:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3472063
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3472066
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3472066 ']'
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3472066
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3472066
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3472066'
killing process with pid 3472066
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3472066
00:06:48.051 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3472066
00:06:48.307
00:06:48.307 real 0m3.139s
00:06:48.307 user 0m3.296s
00:06:48.307 sys 0m1.022s
00:06:48.307 18:38:36
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:48.307 18:38:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:48.307 ************************************
00:06:48.307 END TEST non_locking_app_on_locked_coremask
00:06:48.307 ************************************
00:06:48.307 18:38:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:06:48.307 18:38:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:48.307 18:38:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:48.307 18:38:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:48.307 18:38:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:48.307 ************************************
00:06:48.307 START TEST locking_app_on_unlocked_coremask
00:06:48.307 ************************************
00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3472497
00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3472497 /var/tmp/spdk.sock
00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3472497 ']'
00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask --
common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.307 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.566 [2024-07-14 18:38:36.539538] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:48.566 [2024-07-14 18:38:36.539638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472497 ] 00:06:48.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.566 [2024-07-14 18:38:36.604165] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.566 [2024-07-14 18:38:36.604205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.566 [2024-07-14 18:38:36.691612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3472500 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3472500 /var/tmp/spdk2.sock 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3472500 ']' 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.825 18:38:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.825 [2024-07-14 18:38:37.001027] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:48.825 [2024-07-14 18:38:37.001125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472500 ] 00:06:48.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.082 [2024-07-14 18:38:37.096568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.083 [2024-07-14 18:38:37.279970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.018 18:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.018 18:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.018 18:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3472500 00:06:50.018 18:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3472500 00:06:50.018 18:38:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.620 lslocks: write error 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3472497 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3472497 ']' 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3472497 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3472497 00:06:50.620 18:38:38 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3472497' 00:06:50.620 killing process with pid 3472497 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3472497 00:06:50.620 18:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3472497 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3472500 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3472500 ']' 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3472500 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3472500 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3472500' 00:06:51.189 killing process with pid 3472500 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 3472500 00:06:51.189 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3472500 00:06:51.756 00:06:51.756 real 0m3.321s 00:06:51.756 user 0m3.465s 00:06:51.756 sys 0m1.101s 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.756 ************************************ 00:06:51.756 END TEST locking_app_on_unlocked_coremask 00:06:51.756 ************************************ 00:06:51.756 18:38:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:51.756 18:38:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.756 18:38:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.756 18:38:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.756 18:38:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.756 ************************************ 00:06:51.756 START TEST locking_app_on_locked_coremask 00:06:51.756 ************************************ 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3472933 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3472933 /var/tmp/spdk.sock 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
3472933 ']' 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.756 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.757 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.757 18:38:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.757 [2024-07-14 18:38:39.905029] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:51.757 [2024-07-14 18:38:39.905092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472933 ] 00:06:51.757 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.757 [2024-07-14 18:38:39.964736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.016 [2024-07-14 18:38:40.056858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3472942 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3472942 /var/tmp/spdk2.sock 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3472942 /var/tmp/spdk2.sock 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3472942 /var/tmp/spdk2.sock 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3472942 ']' 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.274 18:38:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.274 [2024-07-14 18:38:40.373271] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:52.274 [2024-07-14 18:38:40.373358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472942 ] 00:06:52.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.274 [2024-07-14 18:38:40.470077] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3472933 has claimed it. 00:06:52.274 [2024-07-14 18:38:40.470133] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:52.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3472942) - No such process 00:06:52.841 ERROR: process (pid: 3472942) is no longer running 00:06:52.841 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.841 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:52.841 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:52.841 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.841 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.841 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.841 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 3472933 00:06:52.841 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3472933 00:06:52.841 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.100 lslocks: write error 00:06:53.100 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3472933 00:06:53.100 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3472933 ']' 00:06:53.100 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3472933 00:06:53.100 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:53.100 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.100 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3472933 00:06:53.360 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.360 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.360 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3472933' 00:06:53.360 killing process with pid 3472933 00:06:53.360 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3472933 00:06:53.360 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3472933 00:06:53.619 00:06:53.619 real 0m1.901s 00:06:53.619 user 0m2.045s 00:06:53.619 sys 0m0.648s 00:06:53.619 18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.619 
18:38:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.619 ************************************ 00:06:53.619 END TEST locking_app_on_locked_coremask 00:06:53.619 ************************************ 00:06:53.619 18:38:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:53.619 18:38:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:53.619 18:38:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.619 18:38:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.619 18:38:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.619 ************************************ 00:06:53.619 START TEST locking_overlapped_coremask 00:06:53.619 ************************************ 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3473225 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3473225 /var/tmp/spdk.sock 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3473225 ']' 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.619 18:38:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.879 [2024-07-14 18:38:41.854529] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:53.879 [2024-07-14 18:38:41.854622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473225 ] 00:06:53.879 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.879 [2024-07-14 18:38:41.913694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.879 [2024-07-14 18:38:42.004968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.879 [2024-07-14 18:38:42.005052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.879 [2024-07-14 18:38:42.005055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3473242 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3473242 /var/tmp/spdk2.sock 00:06:54.138 18:38:42 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3473242 /var/tmp/spdk2.sock 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3473242 /var/tmp/spdk2.sock 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3473242 ']' 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.138 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.138 [2024-07-14 18:38:42.310721] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:54.138 [2024-07-14 18:38:42.310821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473242 ] 00:06:54.138 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.398 [2024-07-14 18:38:42.401567] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3473225 has claimed it. 00:06:54.398 [2024-07-14 18:38:42.401634] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3473242) - No such process 00:06:54.968 ERROR: process (pid: 3473242) is no longer running 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.968 18:38:42 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3473225 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3473225 ']' 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3473225 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.968 18:38:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3473225 00:06:54.968 18:38:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.968 18:38:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.968 18:38:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3473225' 00:06:54.968 killing process with pid 3473225 00:06:54.968 18:38:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 3473225 00:06:54.968 18:38:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3473225 00:06:55.226 00:06:55.226 real 0m1.620s 00:06:55.226 user 0m4.367s 00:06:55.226 sys 0m0.437s 00:06:55.226 18:38:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.226 18:38:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.226 ************************************ 00:06:55.226 END TEST locking_overlapped_coremask 00:06:55.226 ************************************ 00:06:55.226 18:38:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.226 18:38:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.226 18:38:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.226 18:38:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.226 18:38:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.486 ************************************ 00:06:55.486 START TEST locking_overlapped_coremask_via_rpc 00:06:55.486 ************************************ 00:06:55.486 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:55.486 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3473404 00:06:55.486 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.486 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3473404 /var/tmp/spdk.sock 00:06:55.486 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3473404 ']' 00:06:55.486 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.486 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.486 18:38:43 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.486 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.486 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.486 [2024-07-14 18:38:43.527738] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:55.486 [2024-07-14 18:38:43.527838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473404 ] 00:06:55.486 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.486 [2024-07-14 18:38:43.590399] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:55.486 [2024-07-14 18:38:43.590438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.486 [2024-07-14 18:38:43.679713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.486 [2024-07-14 18:38:43.679772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.486 [2024-07-14 18:38:43.679776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3473409 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3473409 /var/tmp/spdk2.sock 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3473409 ']' 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.745 18:38:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.003 [2024-07-14 18:38:43.983376] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:56.004 [2024-07-14 18:38:43.983470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473409 ] 00:06:56.004 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.004 [2024-07-14 18:38:44.076320] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:56.004 [2024-07-14 18:38:44.076363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.263 [2024-07-14 18:38:44.251939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.263 [2024-07-14 18:38:44.251966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.263 [2024-07-14 18:38:44.251967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.829 [2024-07-14 18:38:44.940001] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3473404 has claimed it. 
00:06:56.829 request: 00:06:56.829 { 00:06:56.829 "method": "framework_enable_cpumask_locks", 00:06:56.829 "req_id": 1 00:06:56.829 } 00:06:56.829 Got JSON-RPC error response 00:06:56.829 response: 00:06:56.829 { 00:06:56.829 "code": -32603, 00:06:56.829 "message": "Failed to claim CPU core: 2" 00:06:56.829 } 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3473404 /var/tmp/spdk.sock 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3473404 ']' 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.829 18:38:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.088 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.088 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:57.088 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3473409 /var/tmp/spdk2.sock 00:06:57.088 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3473409 ']' 00:06:57.088 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.088 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.088 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:57.088 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.088 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.348 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.348 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:57.348 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:57.348 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.348 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.348 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.348 00:06:57.348 real 0m1.988s 00:06:57.348 user 0m1.024s 00:06:57.348 sys 0m0.183s 00:06:57.348 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.348 18:38:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.348 ************************************ 00:06:57.348 END TEST locking_overlapped_coremask_via_rpc 00:06:57.348 ************************************ 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.348 18:38:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:57.348 18:38:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
3473404 ]] 00:06:57.348 18:38:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3473404 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3473404 ']' 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3473404 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3473404 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3473404' 00:06:57.348 killing process with pid 3473404 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3473404 00:06:57.348 18:38:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3473404 00:06:57.918 18:38:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3473409 ]] 00:06:57.918 18:38:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3473409 00:06:57.918 18:38:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3473409 ']' 00:06:57.918 18:38:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3473409 00:06:57.918 18:38:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:57.918 18:38:45 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.918 18:38:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3473409 00:06:57.918 18:38:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:57.918 18:38:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:57.918 18:38:45 
event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3473409' 00:06:57.918 killing process with pid 3473409 00:06:57.918 18:38:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3473409 00:06:57.918 18:38:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3473409 00:06:58.177 18:38:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.177 18:38:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.177 18:38:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3473404 ]] 00:06:58.177 18:38:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3473404 00:06:58.177 18:38:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3473404 ']' 00:06:58.177 18:38:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3473404 00:06:58.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3473404) - No such process 00:06:58.177 18:38:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3473404 is not found' 00:06:58.177 Process with pid 3473404 is not found 00:06:58.177 18:38:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3473409 ]] 00:06:58.177 18:38:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3473409 00:06:58.177 18:38:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3473409 ']' 00:06:58.177 18:38:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3473409 00:06:58.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3473409) - No such process 00:06:58.177 18:38:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3473409 is not found' 00:06:58.177 Process with pid 3473409 is not found 00:06:58.177 18:38:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.177 00:06:58.177 real 0m15.614s 00:06:58.177 user 0m27.334s 00:06:58.177 sys 0m5.391s 00:06:58.177 18:38:46 
event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.177 18:38:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.177 ************************************ 00:06:58.177 END TEST cpu_locks 00:06:58.177 ************************************ 00:06:58.177 18:38:46 event -- common/autotest_common.sh@1142 -- # return 0 00:06:58.177 00:06:58.177 real 0m39.388s 00:06:58.177 user 1m15.293s 00:06:58.177 sys 0m9.428s 00:06:58.177 18:38:46 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.177 18:38:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.177 ************************************ 00:06:58.177 END TEST event 00:06:58.177 ************************************ 00:06:58.435 18:38:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.435 18:38:46 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:58.435 18:38:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.435 18:38:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.435 18:38:46 -- common/autotest_common.sh@10 -- # set +x 00:06:58.435 ************************************ 00:06:58.435 START TEST thread 00:06:58.435 ************************************ 00:06:58.435 18:38:46 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:58.435 * Looking for test storage... 
00:06:58.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:58.435 18:38:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.435 18:38:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:58.435 18:38:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.435 18:38:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.435 ************************************ 00:06:58.435 START TEST thread_poller_perf 00:06:58.435 ************************************ 00:06:58.435 18:38:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.435 [2024-07-14 18:38:46.514192] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:58.435 [2024-07-14 18:38:46.514259] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473903 ] 00:06:58.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.435 [2024-07-14 18:38:46.575812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.695 [2024-07-14 18:38:46.665185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.695 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:59.633 ====================================== 00:06:59.633 busy:2711948776 (cyc) 00:06:59.633 total_run_count: 292000 00:06:59.633 tsc_hz: 2700000000 (cyc) 00:06:59.633 ====================================== 00:06:59.633 poller_cost: 9287 (cyc), 3439 (nsec) 00:06:59.633 00:06:59.633 real 0m1.254s 00:06:59.633 user 0m1.177s 00:06:59.634 sys 0m0.071s 00:06:59.634 18:38:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.634 18:38:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.634 ************************************ 00:06:59.634 END TEST thread_poller_perf 00:06:59.634 ************************************ 00:06:59.634 18:38:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:59.634 18:38:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.634 18:38:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:59.634 18:38:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.634 18:38:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.634 ************************************ 00:06:59.634 START TEST thread_poller_perf 00:06:59.634 ************************************ 00:06:59.634 18:38:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.634 [2024-07-14 18:38:47.811394] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:59.634 [2024-07-14 18:38:47.811446] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474055 ] 00:06:59.634 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.894 [2024-07-14 18:38:47.871970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.894 [2024-07-14 18:38:47.964302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.894 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:00.832 ====================================== 00:07:00.832 busy:2702483011 (cyc) 00:07:00.832 total_run_count: 3914000 00:07:00.832 tsc_hz: 2700000000 (cyc) 00:07:00.832 ====================================== 00:07:00.832 poller_cost: 690 (cyc), 255 (nsec) 00:07:00.832 00:07:00.832 real 0m1.247s 00:07:00.832 user 0m1.164s 00:07:00.832 sys 0m0.077s 00:07:00.832 18:38:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.832 18:38:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.832 ************************************ 00:07:00.832 END TEST thread_poller_perf 00:07:00.832 ************************************ 00:07:01.091 18:38:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:01.091 18:38:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.091 00:07:01.091 real 0m2.642s 00:07:01.091 user 0m2.391s 00:07:01.091 sys 0m0.249s 00:07:01.091 18:38:49 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.091 18:38:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.091 ************************************ 00:07:01.091 END TEST thread 00:07:01.091 ************************************ 00:07:01.091 18:38:49 -- common/autotest_common.sh@1142 -- # return 0 00:07:01.091 18:38:49 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:01.091 18:38:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.091 18:38:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.091 18:38:49 -- common/autotest_common.sh@10 -- # set +x 00:07:01.091 ************************************ 00:07:01.091 START TEST accel 00:07:01.091 ************************************ 00:07:01.091 18:38:49 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:01.091 * Looking for test storage... 00:07:01.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:01.091 18:38:49 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:01.091 18:38:49 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:01.091 18:38:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:01.091 18:38:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3474248 00:07:01.091 18:38:49 accel -- accel/accel.sh@63 -- # waitforlisten 3474248 00:07:01.091 18:38:49 accel -- common/autotest_common.sh@829 -- # '[' -z 3474248 ']' 00:07:01.091 18:38:49 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:01.091 18:38:49 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.091 18:38:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:01.091 18:38:49 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.091 18:38:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.091 18:38:49 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.091 18:38:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.091 18:38:49 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.091 18:38:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.091 18:38:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.091 18:38:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.091 18:38:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.091 18:38:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:01.091 18:38:49 accel -- accel/accel.sh@41 -- # jq -r . 00:07:01.091 [2024-07-14 18:38:49.216334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:01.091 [2024-07-14 18:38:49.216419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474248 ] 00:07:01.091 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.091 [2024-07-14 18:38:49.273509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.350 [2024-07-14 18:38:49.358374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.608 18:38:49 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.608 18:38:49 accel -- common/autotest_common.sh@862 -- # return 0 00:07:01.608 18:38:49 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:01.608 18:38:49 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:01.608 18:38:49 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:01.608 18:38:49 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:01.608 18:38:49 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:01.608 18:38:49 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:01.608 18:38:49 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.608 18:38:49 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:01.608 18:38:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.608 18:38:49 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.608 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.608 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.608 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.608 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.608 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.608 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.608 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.608 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.608 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.608 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.608 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.608 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.608 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.608 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # 
IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # IFS== 00:07:01.609 18:38:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:01.609 18:38:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:01.609 18:38:49 accel -- accel/accel.sh@75 -- # killprocess 3474248 00:07:01.609 18:38:49 accel -- common/autotest_common.sh@948 -- # '[' -z 3474248 ']' 00:07:01.609 18:38:49 accel -- common/autotest_common.sh@952 -- # kill -0 3474248 00:07:01.609 18:38:49 accel -- common/autotest_common.sh@953 -- # uname 00:07:01.609 18:38:49 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.609 18:38:49 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3474248 00:07:01.609 18:38:49 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.609 18:38:49 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.609 18:38:49 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3474248' 00:07:01.609 killing process with pid 3474248 00:07:01.609 18:38:49 accel -- common/autotest_common.sh@967 -- # kill 3474248 00:07:01.609 
18:38:49 accel -- common/autotest_common.sh@972 -- # wait 3474248 00:07:01.868 18:38:50 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:01.868 18:38:50 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:01.868 18:38:50 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.868 18:38:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.868 18:38:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.868 18:38:50 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:01.868 18:38:50 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:01.868 18:38:50 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:01.868 18:38:50 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.868 18:38:50 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.868 18:38:50 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.868 18:38:50 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.868 18:38:50 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.868 18:38:50 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:01.868 18:38:50 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:02.128 18:38:50 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.128 18:38:50 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:02.128 18:38:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.128 18:38:50 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:02.128 18:38:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.128 18:38:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.128 18:38:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.128 ************************************ 00:07:02.128 START TEST accel_missing_filename 00:07:02.128 ************************************ 00:07:02.128 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:02.128 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:02.128 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:02.128 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:02.128 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.128 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:02.128 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.128 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:02.128 18:38:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:02.128 18:38:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:02.128 18:38:50 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.128 18:38:50 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.128 18:38:50 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.128 18:38:50 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.128 18:38:50 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.129 18:38:50 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:02.129 18:38:50 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:02.129 [2024-07-14 18:38:50.173060] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:02.129 [2024-07-14 18:38:50.173130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474416 ] 00:07:02.129 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.129 [2024-07-14 18:38:50.237342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.129 [2024-07-14 18:38:50.330292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.388 [2024-07-14 18:38:50.392276] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.388 [2024-07-14 18:38:50.477242] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:02.388 A filename is required. 
00:07:02.388 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:02.388 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.388 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:02.388 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:02.388 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:02.388 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.388 00:07:02.388 real 0m0.407s 00:07:02.388 user 0m0.285s 00:07:02.388 sys 0m0.155s 00:07:02.388 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.388 18:38:50 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:02.388 ************************************ 00:07:02.388 END TEST accel_missing_filename 00:07:02.388 ************************************ 00:07:02.388 18:38:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.388 18:38:50 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.388 18:38:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:02.388 18:38:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.388 18:38:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.388 ************************************ 00:07:02.388 START TEST accel_compress_verify 00:07:02.388 ************************************ 00:07:02.388 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.388 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:02.388 18:38:50 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.388 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:02.388 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.388 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:02.388 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.388 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.647 18:38:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:02.647 18:38:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:02.647 18:38:50 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.647 18:38:50 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.647 18:38:50 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.647 18:38:50 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.647 18:38:50 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.647 18:38:50 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:02.647 18:38:50 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:02.647 [2024-07-14 18:38:50.630562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:02.647 [2024-07-14 18:38:50.630627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474443 ] 00:07:02.647 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.647 [2024-07-14 18:38:50.694653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.647 [2024-07-14 18:38:50.784490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.647 [2024-07-14 18:38:50.843333] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.906 [2024-07-14 18:38:50.924303] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:02.906 00:07:02.906 Compression does not support the verify option, aborting. 00:07:02.906 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:02.906 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.906 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:02.906 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:02.906 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:02.906 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.906 00:07:02.906 real 0m0.390s 00:07:02.906 user 0m0.277s 00:07:02.906 sys 0m0.146s 00:07:02.906 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.906 18:38:50 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:02.906 ************************************ 00:07:02.906 END TEST accel_compress_verify 00:07:02.906 ************************************ 00:07:02.906 18:38:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.906 18:38:51 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:02.906 18:38:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.906 18:38:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.906 18:38:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.906 ************************************ 00:07:02.906 START TEST accel_wrong_workload 00:07:02.906 ************************************ 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:02.906 18:38:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:02.906 18:38:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:02.906 18:38:51 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.906 18:38:51 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.906 18:38:51 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.906 18:38:51 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:07:02.906 18:38:51 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.906 18:38:51 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:02.906 18:38:51 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:02.906 Unsupported workload type: foobar 00:07:02.906 [2024-07-14 18:38:51.062570] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:02.906 accel_perf options: 00:07:02.906 [-h help message] 00:07:02.906 [-q queue depth per core] 00:07:02.906 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:02.906 [-T number of threads per core 00:07:02.906 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:02.906 [-t time in seconds] 00:07:02.906 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:02.906 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:02.906 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:02.906 [-l for compress/decompress workloads, name of uncompressed input file 00:07:02.906 [-S for crc32c workload, use this seed value (default 0) 00:07:02.906 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:02.906 [-f for fill workload, use this BYTE value (default 255) 00:07:02.906 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:02.906 [-y verify result if this switch is on] 00:07:02.906 [-a tasks to allocate per core (default: same value as -q)] 00:07:02.906 Can be used to spread operations across a wider range of memory. 
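The "Unsupported workload type: foobar" error above comes from accel_perf rejecting the `-w` value before starting the app. As a minimal sketch only (illustrative shell, not SPDK's actual C implementation in accel_perf.c), the validation behaves roughly like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the -w workload validation whose failure produces the
# "Unsupported workload type" message logged above. The function name
# valid_workload and this shell form are illustrative; the real check lives
# in the accel_perf C binary.
valid_workload() {
  case "$1" in
    # Workload names taken from the help listing printed in this log.
    copy|fill|crc32c|copy_crc32c|compare|compress|decompress|dualcast|xor) return 0 ;;
    dif_verify|dif_verify_copy|dif_generate|dif_generate_copy) return 0 ;;
    *) echo "Unsupported workload type: $1" >&2; return 1 ;;
  esac
}
```

Under this sketch, `valid_workload foobar` fails exactly as the test expects, which is why the `NOT` wrapper treats the nonzero exit as a pass.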
00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.906 00:07:02.906 real 0m0.022s 00:07:02.906 user 0m0.011s 00:07:02.906 sys 0m0.011s 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.906 18:38:51 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:02.906 ************************************ 00:07:02.906 END TEST accel_wrong_workload 00:07:02.906 ************************************ 00:07:02.906 Error: writing output failed: Broken pipe 00:07:02.906 18:38:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.906 18:38:51 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:02.906 18:38:51 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:02.906 18:38:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.906 18:38:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.906 ************************************ 00:07:02.907 START TEST accel_negative_buffers 00:07:02.907 ************************************ 00:07:02.907 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:02.907 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:02.907 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:02.907 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:02.907 18:38:51 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.907 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:02.907 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.907 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:02.907 18:38:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:02.907 18:38:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:02.907 18:38:51 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.907 18:38:51 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.907 18:38:51 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.907 18:38:51 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.907 18:38:51 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.907 18:38:51 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:02.907 18:38:51 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:03.166 -x option must be non-negative. 00:07:03.166 [2024-07-14 18:38:51.132327] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:03.166 accel_perf options: 00:07:03.166 [-h help message] 00:07:03.166 [-q queue depth per core] 00:07:03.166 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:03.166 [-T number of threads per core 00:07:03.166 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:07:03.166 [-t time in seconds] 00:07:03.166 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:03.166 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:03.166 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:03.166 [-l for compress/decompress workloads, name of uncompressed input file 00:07:03.166 [-S for crc32c workload, use this seed value (default 0) 00:07:03.166 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:03.166 [-f for fill workload, use this BYTE value (default 255) 00:07:03.167 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:03.167 [-y verify result if this switch is on] 00:07:03.167 [-a tasks to allocate per core (default: same value as -q)] 00:07:03.167 Can be used to spread operations across a wider range of memory. 
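The repeated `es=` transitions in this log (es=234 → es=106 → es=1, and es=161 → es=33 → es=1) show the `NOT` helper in autotest_common.sh normalizing exit statuses: values above 128 have the 128 signal offset stripped, then known failure codes collapse to 1 before the final `(( !es == 0 ))` check. A minimal sketch of that normalization, assuming an illustrative function name (`normalize_es` is not the real helper's name):

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the es= handling visible in this xtrace;
# the real logic is inline in common/autotest_common.sh, not a function.
normalize_es() {
  local es=$1
  # Statuses above 128 conventionally mean "terminated by signal N";
  # strip the offset, matching the es=234 -> es=106 step in the log.
  if (( es > 128 )); then
    es=$(( es - 128 ))
  fi
  # Collapse the failure codes seen in this log to a generic 1, matching
  # the es=106 -> es=1 and es=33 -> es=1 steps.
  case "$es" in
    106|33) es=1 ;;
  esac
  echo "$es"
}
```

With es normalized to 1, `(( !es == 0 ))` evaluates `!1 == 0`, i.e. `0 == 0`, which is true, so the wrapper reports success: the command failed, which is exactly what a `NOT` test requires.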
00:07:03.167 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:03.167 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.167 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.167 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.167 00:07:03.167 real 0m0.024s 00:07:03.167 user 0m0.013s 00:07:03.167 sys 0m0.011s 00:07:03.167 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.167 18:38:51 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:03.167 ************************************ 00:07:03.167 END TEST accel_negative_buffers 00:07:03.167 ************************************ 00:07:03.167 Error: writing output failed: Broken pipe 00:07:03.167 18:38:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.167 18:38:51 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:03.167 18:38:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:03.167 18:38:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.167 18:38:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.167 ************************************ 00:07:03.167 START TEST accel_crc32c 00:07:03.167 ************************************ 00:07:03.167 18:38:51 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:03.167 18:38:51 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:03.167 [2024-07-14 18:38:51.194169] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:03.167 [2024-07-14 18:38:51.194245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474629 ] 00:07:03.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.167 [2024-07-14 18:38:51.256456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.167 [2024-07-14 18:38:51.350665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 
18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:03.427 18:38:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.366 18:38:52 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:04.366 18:38:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.366 00:07:04.366 real 0m1.406s 00:07:04.366 user 0m1.263s 00:07:04.366 sys 0m0.146s 00:07:04.366 18:38:52 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.366 18:38:52 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:04.366 ************************************ 00:07:04.366 END TEST accel_crc32c 00:07:04.366 ************************************ 00:07:04.626 18:38:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.626 18:38:52 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:04.626 18:38:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:04.626 18:38:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.626 18:38:52 accel -- common/autotest_common.sh@10 -- # set +x 
00:07:04.626 ************************************ 00:07:04.626 START TEST accel_crc32c_C2 00:07:04.626 ************************************ 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.626 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:04.626 [2024-07-14 18:38:52.652407] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:04.626 [2024-07-14 18:38:52.652471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474786 ] 00:07:04.626 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.626 [2024-07-14 18:38:52.714455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.626 [2024-07-14 18:38:52.807166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- 
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:04.885 18:38:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:05.842 18:38:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:05.842 18:38:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:07:05.842 18:38:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:05.842
00:07:05.842 real 0m1.407s
00:07:05.842 user 0m1.260s
00:07:05.842 sys 0m0.149s
00:07:05.842 18:38:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:05.842 18:38:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:05.842 ************************************
00:07:05.842 END TEST accel_crc32c_C2
00:07:05.842 ************************************
00:07:05.842 18:38:54 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:05.842 18:38:54 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:05.842 18:38:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:05.842 18:38:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:05.842 18:38:54 accel -- common/autotest_common.sh@10 -- # set +x
00:07:06.100 ************************************
00:07:06.100 START TEST accel_copy
00:07:06.100 ************************************
00:07:06.100 18:38:54 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:07:06.100 [2024-07-14 18:38:54.103568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:06.100 [2024-07-14 18:38:54.103633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474939 ]
00:07:06.100 EAL: No free 2048 kB hugepages reported on node 1
00:07:06.100 [2024-07-14 18:38:54.166377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.100 [2024-07-14 18:38:54.257571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:06.100 18:38:54 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:07:07.479 18:38:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:07.479 18:38:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:07.479 18:38:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:07.479
00:07:07.479 real 0m1.401s
00:07:07.479 user 0m1.258s
00:07:07.479 sys 0m0.145s
00:07:07.479 18:38:55 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:07.479 18:38:55 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:07:07.479 ************************************
00:07:07.479 END TEST accel_copy
00:07:07.479 ************************************
00:07:07.479 18:38:55 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:07.479 18:38:55 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:07.479 18:38:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:07.479 18:38:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:07.479 18:38:55 accel -- common/autotest_common.sh@10 -- # set +x
00:07:07.479 ************************************
00:07:07.479 START TEST accel_fill
00:07:07.479 ************************************
00:07:07.479 18:38:55 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:07:07.479 18:38:55 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:07:07.479 [2024-07-14 18:38:55.550554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:07.479 [2024-07-14 18:38:55.550620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475218 ]
00:07:07.479 EAL: No free 2048 kB hugepages reported on node 1
00:07:07.479 [2024-07-14 18:38:55.614702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:07.743 [2024-07-14 18:38:55.709077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:07:07.743 18:38:55 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:07:08.715 18:38:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:08.715 18:38:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:08.715 18:38:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:08.715
00:07:08.715 real 0m1.403s
00:07:08.715 user 0m1.254s
00:07:08.715 sys 0m0.151s
00:07:08.715 18:38:56 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:08.715 18:38:56 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:07:08.715 ************************************
00:07:08.715 END TEST accel_fill
00:07:08.715 ************************************
00:07:08.974 18:38:56 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:08.974 18:38:56 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:08.974 18:38:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:08.974 18:38:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:08.974 18:38:56 accel -- common/autotest_common.sh@10 -- # set +x
00:07:08.974 ************************************
00:07:08.974 START TEST accel_copy_crc32c
00:07:08.974 ************************************
00:07:08.974 18:38:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:07:08.974 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:08.974 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:08.974 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:08.974 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:08.975 18:38:56 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:08.975 [2024-07-14 18:38:56.997843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:08.975 [2024-07-14 18:38:56.997933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475374 ]
00:07:08.975 EAL: No free 2048 kB hugepages reported on node 1
00:07:08.975 [2024-07-14 18:38:57.060067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.235 [2024-07-14 18:38:57.153478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:07:09.235 18:38:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:07:10.174 18:38:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:10.174 18:38:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:10.174 18:38:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:10.174
00:07:10.174 real 0m1.412s
00:07:10.174 user 0m1.269s
00:07:10.174 sys 0m0.145s
00:07:10.174 18:38:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:10.174 18:38:58 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:07:10.174 ************************************
00:07:10.174 END TEST accel_copy_crc32c
00:07:10.174 ************************************ 00:07:10.433 18:38:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.433 18:38:58 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.433 18:38:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:10.433 18:38:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.433 18:38:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.433 ************************************ 00:07:10.433 START TEST accel_copy_crc32c_C2 00:07:10.433 ************************************ 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.433 18:38:58 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:10.433 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:10.433 [2024-07-14 18:38:58.456267] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:10.433 [2024-07-14 18:38:58.456333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475533 ] 00:07:10.433 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.433 [2024-07-14 18:38:58.518970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.433 [2024-07-14 18:38:58.611656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.691 
18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.691 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.692 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.692 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.692 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.692 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.692 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.692 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.692 18:38:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 18:38:59 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.629 00:07:11.629 real 0m1.409s 00:07:11.629 user 0m1.268s 00:07:11.629 sys 0m0.144s 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.629 18:38:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:11.629 ************************************ 00:07:11.629 
END TEST accel_copy_crc32c_C2 00:07:11.629 ************************************ 00:07:11.888 18:38:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.888 18:38:59 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:11.888 18:38:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:11.888 18:38:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.888 18:38:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.888 ************************************ 00:07:11.888 START TEST accel_dualcast 00:07:11.888 ************************************ 00:07:11.888 18:38:59 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.888 18:38:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:11.888 18:38:59 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:07:11.888 [2024-07-14 18:38:59.915014] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:11.888 [2024-07-14 18:38:59.915078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475745 ] 00:07:11.888 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.888 [2024-07-14 18:38:59.978850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.888 [2024-07-14 18:39:00.081048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:12.146 18:39:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:12.146 18:39:00 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.083 18:39:01 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:13.083 18:39:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.083 00:07:13.083 real 0m1.409s 00:07:13.083 user 0m1.258s 00:07:13.083 sys 0m0.152s 00:07:13.083 18:39:01 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.083 18:39:01 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:13.083 ************************************ 00:07:13.083 END TEST accel_dualcast 00:07:13.083 ************************************ 00:07:13.341 18:39:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.341 18:39:01 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:13.341 18:39:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:13.341 18:39:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.341 18:39:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.341 ************************************ 00:07:13.341 START TEST accel_compare 00:07:13.341 ************************************ 00:07:13.341 18:39:01 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:13.341 18:39:01 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:13.341 [2024-07-14 18:39:01.365813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:13.341 [2024-07-14 18:39:01.365893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476047 ] 00:07:13.341 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.341 [2024-07-14 18:39:01.428462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.341 [2024-07-14 18:39:01.520114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:13.600 
18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:13.600 
18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.600 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.601 
18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:13.601 18:39:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.537 18:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.538 18:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.538 18:39:02 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:07:14.538 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.538 18:39:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:14.538 18:39:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:14.538 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:14.538 18:39:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:14.538 18:39:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.538 18:39:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:14.538 18:39:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.538 00:07:14.538 real 0m1.408s 00:07:14.538 user 0m1.271s 00:07:14.538 sys 0m0.138s 00:07:14.538 18:39:02 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.538 18:39:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:14.538 ************************************ 00:07:14.538 END TEST accel_compare 00:07:14.538 ************************************ 00:07:14.797 18:39:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.797 18:39:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:14.797 18:39:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:14.797 18:39:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.797 18:39:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.797 ************************************ 00:07:14.797 START TEST accel_xor 00:07:14.797 ************************************ 00:07:14.797 18:39:02 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:14.797 18:39:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:14.797 [2024-07-14 18:39:02.815574] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:14.797 [2024-07-14 18:39:02.815638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476232 ] 00:07:14.797 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.797 [2024-07-14 18:39:02.878920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.797 [2024-07-14 18:39:02.971650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 
18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.056 18:39:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.991 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:15.992 18:39:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.992 00:07:15.992 real 0m1.404s 00:07:15.992 user 0m1.258s 00:07:15.992 sys 
0m0.148s 00:07:15.992 18:39:04 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.992 18:39:04 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:15.992 ************************************ 00:07:15.992 END TEST accel_xor 00:07:15.992 ************************************ 00:07:16.250 18:39:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.250 18:39:04 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:16.250 18:39:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:16.250 18:39:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.250 18:39:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.250 ************************************ 00:07:16.250 START TEST accel_xor 00:07:16.250 ************************************ 00:07:16.250 18:39:04 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.250 18:39:04 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:16.250 18:39:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:16.250 [2024-07-14 18:39:04.261299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:16.250 [2024-07-14 18:39:04.261367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476399 ] 00:07:16.250 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.250 [2024-07-14 18:39:04.323886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.250 [2024-07-14 18:39:04.421869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.508 18:39:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.447 18:39:05 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:17.447 18:39:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.447 00:07:17.447 real 0m1.403s 00:07:17.447 user 0m1.255s 00:07:17.447 sys 0m0.149s 00:07:17.447 18:39:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.447 18:39:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:17.447 ************************************ 00:07:17.447 END TEST accel_xor 00:07:17.447 ************************************ 00:07:17.447 18:39:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.447 18:39:05 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:17.447 18:39:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:17.447 18:39:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.447 18:39:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.706 ************************************ 00:07:17.706 START TEST accel_dif_verify 00:07:17.706 ************************************ 00:07:17.706 18:39:05 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:17.706 [2024-07-14 18:39:05.709558] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:17.706 [2024-07-14 18:39:05.709625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476671 ] 00:07:17.706 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.706 [2024-07-14 18:39:05.771544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.706 [2024-07-14 18:39:05.863221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify 
-- accel/accel.sh@20 -- # val=0x1 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:17.706 18:39:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.087 18:39:07 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:19.087 18:39:07 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:07:19.087 00:07:19.087 real 0m1.392s 00:07:19.087 user 0m1.257s 00:07:19.087 sys 0m0.138s 00:07:19.087 18:39:07 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.087 18:39:07 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:19.087 ************************************ 00:07:19.087 END TEST accel_dif_verify 00:07:19.087 ************************************ 00:07:19.087 18:39:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.087 18:39:07 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:19.087 18:39:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:19.087 18:39:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.087 18:39:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.087 ************************************ 00:07:19.087 START TEST accel_dif_generate 00:07:19.087 ************************************ 00:07:19.088 18:39:07 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.088 18:39:07 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:19.088 18:39:07 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:19.088 [2024-07-14 18:39:07.148543] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:19.088 [2024-07-14 18:39:07.148608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476834 ] 00:07:19.088 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.088 [2024-07-14 18:39:07.213630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.088 [2024-07-14 18:39:07.307711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:19.348 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:19.349 18:39:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:07:20.737 18:39:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.737 00:07:20.737 real 0m1.393s 00:07:20.737 user 0m1.250s 00:07:20.737 sys 0m0.146s 00:07:20.737 18:39:08 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.737 18:39:08 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:20.737 ************************************ 00:07:20.737 END TEST accel_dif_generate 00:07:20.737 ************************************ 00:07:20.737 18:39:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.737 18:39:08 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:20.737 18:39:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:20.737 18:39:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.737 18:39:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.737 ************************************ 00:07:20.737 START TEST accel_dif_generate_copy 00:07:20.737 ************************************ 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:20.737 18:39:08 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.737 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:20.738 [2024-07-14 18:39:08.590828] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:20.738 [2024-07-14 18:39:08.590905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477410 ] 00:07:20.738 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.738 [2024-07-14 18:39:08.654787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.738 [2024-07-14 18:39:08.750022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:20.738 18:39:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.118 18:39:09 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.118 00:07:22.118 real 0m1.417s 00:07:22.118 user 0m1.262s 00:07:22.118 sys 0m0.157s 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.118 18:39:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:22.118 ************************************ 00:07:22.118 END TEST accel_dif_generate_copy 00:07:22.118 ************************************ 00:07:22.118 18:39:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.118 18:39:10 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:22.118 18:39:10 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.118 18:39:10 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:22.118 18:39:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.118 18:39:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.118 ************************************ 00:07:22.118 START TEST accel_comp 00:07:22.118 ************************************ 00:07:22.118 18:39:10 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.118 18:39:10 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:22.118 [2024-07-14 18:39:10.055520] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:22.118 [2024-07-14 18:39:10.055601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477685 ] 00:07:22.118 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.118 [2024-07-14 18:39:10.119926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.118 [2024-07-14 18:39:10.213388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.118 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.119 18:39:10 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.119 18:39:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.498 18:39:11 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:23.498 18:39:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.498 00:07:23.498 real 0m1.417s 00:07:23.498 user 0m1.270s 00:07:23.498 sys 0m0.150s 00:07:23.498 18:39:11 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.498 18:39:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:23.498 ************************************ 00:07:23.498 END TEST accel_comp 00:07:23.498 ************************************ 00:07:23.498 18:39:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.498 18:39:11 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.498 18:39:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:23.498 18:39:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.498 18:39:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.498 ************************************ 00:07:23.498 START TEST accel_decomp 00:07:23.498 ************************************ 00:07:23.498 18:39:11 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.498 18:39:11 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.499 18:39:11 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.499 18:39:11 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.499 18:39:11 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:23.499 18:39:11 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:07:23.499 [2024-07-14 18:39:11.517284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:23.499 [2024-07-14 18:39:11.517350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477919 ] 00:07:23.499 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.499 [2024-07-14 18:39:11.580724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.499 [2024-07-14 18:39:11.670484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 
18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.759 18:39:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.693 18:39:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.693 00:07:24.693 real 0m1.399s 00:07:24.693 user 0m1.268s 00:07:24.693 sys 0m0.133s 00:07:24.693 18:39:12 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.693 18:39:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:24.693 ************************************ 00:07:24.693 END TEST accel_decomp 00:07:24.693 ************************************ 00:07:24.693 18:39:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.693 18:39:12 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.952 18:39:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:24.952 18:39:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.952 18:39:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.952 ************************************ 00:07:24.952 START TEST accel_decomp_full 00:07:24.952 ************************************ 00:07:24.952 18:39:12 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.952 
18:39:12 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:24.952 18:39:12 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:24.952 [2024-07-14 18:39:12.958361] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:24.952 [2024-07-14 18:39:12.958427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478085 ] 00:07:24.952 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.952 [2024-07-14 18:39:13.021293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.952 [2024-07-14 18:39:13.114111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.210 18:39:13 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:25.210 18:39:13 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:25.210 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.211 18:39:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.147 18:39:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.147 00:07:26.147 real 0m1.428s 00:07:26.147 user 0m1.282s 00:07:26.147 sys 0m0.149s 00:07:26.147 18:39:14 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.147 18:39:14 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:26.147 ************************************ 00:07:26.147 END TEST accel_decomp_full 00:07:26.147 ************************************ 00:07:26.405 18:39:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.405 18:39:14 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.405 18:39:14 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:26.405 18:39:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.405 18:39:14 accel 
-- common/autotest_common.sh@10 -- # set +x 00:07:26.405 ************************************ 00:07:26.405 START TEST accel_decomp_mcore 00:07:26.405 ************************************ 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:26.405 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:26.405 [2024-07-14 18:39:14.427691] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:26.405 [2024-07-14 18:39:14.427755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478239 ] 00:07:26.405 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.405 [2024-07-14 18:39:14.490009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.405 [2024-07-14 18:39:14.586238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.405 [2024-07-14 18:39:14.586293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.405 [2024-07-14 18:39:14.586421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.405 [2024-07-14 18:39:14.586424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:26.677 18:39:14 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.677 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.678 18:39:14 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.678 18:39:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.621 18:39:15 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.621 
18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.621 00:07:27.621 real 0m1.405s 00:07:27.621 user 0m4.688s 00:07:27.621 sys 0m0.143s 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.621 18:39:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:27.621 ************************************ 00:07:27.621 END TEST accel_decomp_mcore 00:07:27.621 ************************************ 00:07:27.621 18:39:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.621 18:39:15 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.621 18:39:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:27.621 18:39:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.621 18:39:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.881 ************************************ 00:07:27.881 START TEST accel_decomp_full_mcore 00:07:27.881 ************************************ 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.881 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.882 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.882 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.882 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.882 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:27.882 18:39:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:27.882 [2024-07-14 18:39:15.876000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:27.882 [2024-07-14 18:39:15.876059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478518 ] 00:07:27.882 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.882 [2024-07-14 18:39:15.938069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.882 [2024-07-14 18:39:16.034222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.882 [2024-07-14 18:39:16.034276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.882 [2024-07-14 18:39:16.034394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.882 [2024-07-14 18:39:16.034397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:27.882 18:39:16 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.882 18:39:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 
18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.257 18:39:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.257 00:07:29.257 real 0m1.428s 00:07:29.257 user 0m4.773s 00:07:29.257 sys 0m0.151s 00:07:29.258 18:39:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.258 18:39:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:29.258 ************************************ 00:07:29.258 END TEST accel_decomp_full_mcore 00:07:29.258 ************************************ 00:07:29.258 18:39:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.258 18:39:17 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.258 18:39:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:29.258 18:39:17 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:29.258 18:39:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.258 ************************************ 00:07:29.258 START TEST accel_decomp_mthread 00:07:29.258 ************************************ 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:29.258 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:07:29.258 [2024-07-14 18:39:17.350971] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:29.258 [2024-07-14 18:39:17.351033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478676 ] 00:07:29.258 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.258 [2024-07-14 18:39:17.413069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.517 [2024-07-14 18:39:17.505127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:29.517 
18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.517 18:39:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.917 00:07:30.917 real 0m1.405s 00:07:30.917 user 0m1.256s 00:07:30.917 sys 0m0.151s 00:07:30.917 18:39:18 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.917 18:39:18 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:30.917 ************************************ 00:07:30.917 END TEST accel_decomp_mthread 00:07:30.917 ************************************ 00:07:30.917 18:39:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.917 18:39:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.917 18:39:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:30.917 18:39:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.917 18:39:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.917 ************************************ 00:07:30.917 START TEST accel_decomp_full_mthread 00:07:30.917 ************************************ 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:30.917 18:39:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:30.917 [2024-07-14 18:39:18.803161] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:30.917 [2024-07-14 18:39:18.803249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478841 ] 00:07:30.917 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.917 [2024-07-14 18:39:18.865143] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.917 [2024-07-14 18:39:18.957623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:19 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.917 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.918 18:39:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.299 00:07:32.299 real 0m1.443s 00:07:32.299 user 0m1.305s 00:07:32.299 sys 0m0.142s 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.299 18:39:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:32.299 ************************************ 00:07:32.299 END TEST accel_decomp_full_mthread 00:07:32.299 ************************************ 00:07:32.299 18:39:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.299 18:39:20 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:32.299 18:39:20 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:32.299 
18:39:20 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:32.299 18:39:20 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:32.299 18:39:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.299 18:39:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.300 18:39:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.300 18:39:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.300 18:39:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.300 18:39:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.300 18:39:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.300 18:39:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:32.300 18:39:20 accel -- accel/accel.sh@41 -- # jq -r . 00:07:32.300 ************************************ 00:07:32.300 START TEST accel_dif_functional_tests 00:07:32.300 ************************************ 00:07:32.300 18:39:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:32.300 [2024-07-14 18:39:20.315456] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:32.300 [2024-07-14 18:39:20.315515] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479046 ] 00:07:32.300 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.300 [2024-07-14 18:39:20.377821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.300 [2024-07-14 18:39:20.473607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.300 [2024-07-14 18:39:20.473657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.300 [2024-07-14 18:39:20.473675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.558 00:07:32.558 00:07:32.558 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.558 http://cunit.sourceforge.net/ 00:07:32.558 00:07:32.558 00:07:32.558 Suite: accel_dif 00:07:32.558 Test: verify: DIF generated, GUARD check ...passed 00:07:32.558 Test: verify: DIF generated, APPTAG check ...passed 00:07:32.558 Test: verify: DIF generated, REFTAG check ...passed 00:07:32.558 Test: verify: DIF not generated, GUARD check ...[2024-07-14 18:39:20.568740] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:32.558 passed 00:07:32.558 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 18:39:20.568807] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:32.558 passed 00:07:32.558 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 18:39:20.568838] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:32.558 passed 00:07:32.558 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:32.558 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 18:39:20.568920] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:07:32.558 passed 00:07:32.558 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:32.558 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:32.558 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:32.558 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 18:39:20.569078] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:32.558 passed 00:07:32.558 Test: verify copy: DIF generated, GUARD check ...passed 00:07:32.558 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:32.558 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:32.558 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 18:39:20.569259] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:32.558 passed 00:07:32.558 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 18:39:20.569297] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:32.559 passed 00:07:32.559 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 18:39:20.569330] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:32.559 passed 00:07:32.559 Test: generate copy: DIF generated, GUARD check ...passed 00:07:32.559 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:32.559 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:32.559 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:32.559 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:32.559 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:32.559 Test: generate copy: iovecs-len validate ...[2024-07-14 18:39:20.569548] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:32.559 passed 00:07:32.559 Test: generate copy: buffer alignment validate ...passed 00:07:32.559 00:07:32.559 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.559 suites 1 1 n/a 0 0 00:07:32.559 tests 26 26 26 0 0 00:07:32.559 asserts 115 115 115 0 n/a 00:07:32.559 00:07:32.559 Elapsed time = 0.003 seconds 00:07:32.559 00:07:32.559 real 0m0.505s 00:07:32.559 user 0m0.785s 00:07:32.559 sys 0m0.184s 00:07:32.559 18:39:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.559 18:39:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:32.559 ************************************ 00:07:32.559 END TEST accel_dif_functional_tests 00:07:32.559 ************************************ 00:07:32.817 18:39:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.817 00:07:32.817 real 0m31.689s 00:07:32.817 user 0m35.069s 00:07:32.817 sys 0m4.599s 00:07:32.817 18:39:20 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.817 18:39:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.817 ************************************ 00:07:32.817 END TEST accel 00:07:32.817 ************************************ 00:07:32.817 18:39:20 -- common/autotest_common.sh@1142 -- # return 0 00:07:32.817 18:39:20 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:32.817 18:39:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.817 18:39:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.817 18:39:20 -- common/autotest_common.sh@10 -- # set +x 00:07:32.817 ************************************ 00:07:32.817 START TEST accel_rpc 00:07:32.817 ************************************ 00:07:32.817 18:39:20 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:32.817 * Looking for test storage... 
00:07:32.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:32.817 18:39:20 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:32.817 18:39:20 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3479180 00:07:32.817 18:39:20 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:32.817 18:39:20 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3479180 00:07:32.817 18:39:20 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3479180 ']' 00:07:32.817 18:39:20 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.817 18:39:20 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.817 18:39:20 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.817 18:39:20 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.817 18:39:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.817 [2024-07-14 18:39:20.946608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:32.817 [2024-07-14 18:39:20.946704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479180 ] 00:07:32.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.817 [2024-07-14 18:39:21.005126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.076 [2024-07-14 18:39:21.090338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.076 18:39:21 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.076 18:39:21 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:33.076 18:39:21 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:33.076 18:39:21 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:33.076 18:39:21 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:33.076 18:39:21 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:33.076 18:39:21 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:33.076 18:39:21 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.076 18:39:21 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.076 18:39:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.076 ************************************ 00:07:33.077 START TEST accel_assign_opcode 00:07:33.077 ************************************ 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:07:33.077 [2024-07-14 18:39:21.183017] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:33.077 [2024-07-14 18:39:21.191031] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.077 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:33.336 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.336 18:39:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:33.336 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.336 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:33.336 18:39:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:33.336 18:39:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:33.336 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.336 software 00:07:33.336 00:07:33.336 real 0m0.292s 00:07:33.336 user 0m0.043s 00:07:33.336 sys 0m0.003s 00:07:33.336 18:39:21 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.336 18:39:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:33.336 ************************************ 00:07:33.336 END TEST accel_assign_opcode 00:07:33.336 ************************************ 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:33.336 18:39:21 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3479180 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3479180 ']' 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3479180 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3479180 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3479180' 00:07:33.336 killing process with pid 3479180 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@967 -- # kill 3479180 00:07:33.336 18:39:21 accel_rpc -- common/autotest_common.sh@972 -- # wait 3479180 00:07:33.904 00:07:33.904 real 0m1.093s 00:07:33.904 user 0m1.029s 00:07:33.904 sys 0m0.425s 00:07:33.904 18:39:21 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.904 18:39:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.904 ************************************ 00:07:33.904 END TEST accel_rpc 00:07:33.904 ************************************ 00:07:33.904 18:39:21 -- common/autotest_common.sh@1142 -- # return 0 00:07:33.904 18:39:21 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:33.904 18:39:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.904 18:39:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.904 18:39:21 -- common/autotest_common.sh@10 -- # set +x 00:07:33.904 ************************************ 00:07:33.904 START TEST app_cmdline 00:07:33.904 ************************************ 00:07:33.904 18:39:21 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:33.904 * Looking for test storage... 00:07:33.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:33.904 18:39:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:33.904 18:39:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3479386 00:07:33.904 18:39:22 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:33.904 18:39:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3479386 00:07:33.904 18:39:22 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3479386 ']' 00:07:33.904 18:39:22 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.904 18:39:22 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.904 18:39:22 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.904 18:39:22 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.904 18:39:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.904 [2024-07-14 18:39:22.089186] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:07:33.904 [2024-07-14 18:39:22.089285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479386 ] 00:07:33.904 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.163 [2024-07-14 18:39:22.154026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.163 [2024-07-14 18:39:22.246437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.422 18:39:22 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.422 18:39:22 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:34.422 18:39:22 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:34.680 { 00:07:34.680 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:34.680 "fields": { 00:07:34.680 "major": 24, 00:07:34.680 "minor": 9, 00:07:34.680 "patch": 0, 00:07:34.680 "suffix": "-pre", 00:07:34.680 "commit": "719d03c6a" 00:07:34.680 } 00:07:34.680 } 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:34.680 18:39:22 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:34.680 18:39:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:34.680 18:39:22 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:34.938 request: 00:07:34.938 { 00:07:34.938 "method": "env_dpdk_get_mem_stats", 00:07:34.938 "req_id": 1 
00:07:34.938 } 00:07:34.938 Got JSON-RPC error response 00:07:34.938 response: 00:07:34.938 { 00:07:34.938 "code": -32601, 00:07:34.938 "message": "Method not found" 00:07:34.938 } 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.938 18:39:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3479386 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3479386 ']' 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3479386 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3479386 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3479386' 00:07:34.938 killing process with pid 3479386 00:07:34.938 18:39:23 app_cmdline -- common/autotest_common.sh@967 -- # kill 3479386 00:07:34.939 18:39:23 app_cmdline -- common/autotest_common.sh@972 -- # wait 3479386 00:07:35.519 00:07:35.519 real 0m1.463s 00:07:35.519 user 0m1.795s 00:07:35.519 sys 0m0.477s 00:07:35.519 18:39:23 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.519 18:39:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.519 ************************************ 00:07:35.519 END TEST app_cmdline 00:07:35.519 ************************************ 00:07:35.519 18:39:23 -- 
common/autotest_common.sh@1142 -- # return 0 00:07:35.519 18:39:23 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:35.519 18:39:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.519 18:39:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.519 18:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:35.519 ************************************ 00:07:35.519 START TEST version 00:07:35.519 ************************************ 00:07:35.519 18:39:23 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:35.519 * Looking for test storage... 00:07:35.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:35.519 18:39:23 version -- app/version.sh@17 -- # get_header_version major 00:07:35.519 18:39:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:35.519 18:39:23 version -- app/version.sh@14 -- # cut -f2 00:07:35.519 18:39:23 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.519 18:39:23 version -- app/version.sh@17 -- # major=24 00:07:35.519 18:39:23 version -- app/version.sh@18 -- # get_header_version minor 00:07:35.519 18:39:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:35.519 18:39:23 version -- app/version.sh@14 -- # cut -f2 00:07:35.519 18:39:23 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.519 18:39:23 version -- app/version.sh@18 -- # minor=9 00:07:35.519 18:39:23 version -- app/version.sh@19 -- # get_header_version patch 00:07:35.519 18:39:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:35.519 
18:39:23 version -- app/version.sh@14 -- # cut -f2 00:07:35.519 18:39:23 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.519 18:39:23 version -- app/version.sh@19 -- # patch=0 00:07:35.519 18:39:23 version -- app/version.sh@20 -- # get_header_version suffix 00:07:35.519 18:39:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:35.519 18:39:23 version -- app/version.sh@14 -- # cut -f2 00:07:35.519 18:39:23 version -- app/version.sh@14 -- # tr -d '"' 00:07:35.519 18:39:23 version -- app/version.sh@20 -- # suffix=-pre 00:07:35.519 18:39:23 version -- app/version.sh@22 -- # version=24.9 00:07:35.519 18:39:23 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:35.519 18:39:23 version -- app/version.sh@28 -- # version=24.9rc0 00:07:35.519 18:39:23 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:35.519 18:39:23 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:35.519 18:39:23 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:35.519 18:39:23 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:35.519 00:07:35.519 real 0m0.111s 00:07:35.519 user 0m0.067s 00:07:35.519 sys 0m0.065s 00:07:35.519 18:39:23 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.519 18:39:23 version -- common/autotest_common.sh@10 -- # set +x 00:07:35.519 ************************************ 00:07:35.519 END TEST version 00:07:35.519 ************************************ 00:07:35.519 18:39:23 -- common/autotest_common.sh@1142 -- # return 0 00:07:35.519 18:39:23 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:07:35.519 18:39:23 -- spdk/autotest.sh@198 -- # uname -s 00:07:35.519 18:39:23 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:35.519 18:39:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:35.519 18:39:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:35.519 18:39:23 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:35.519 18:39:23 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:35.519 18:39:23 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:35.519 18:39:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.519 18:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:35.519 18:39:23 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:35.519 18:39:23 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:35.519 18:39:23 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:35.519 18:39:23 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:35.519 18:39:23 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:35.519 18:39:23 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:35.519 18:39:23 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:35.519 18:39:23 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.519 18:39:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.519 18:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:35.519 ************************************ 00:07:35.519 START TEST nvmf_tcp 00:07:35.519 ************************************ 00:07:35.519 18:39:23 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:35.519 * Looking for test storage... 00:07:35.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.519 18:39:23 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.520 18:39:23 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.520 18:39:23 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.520 18:39:23 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.520 18:39:23 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.520 18:39:23 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.520 18:39:23 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.520 18:39:23 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:35.520 18:39:23 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.520 18:39:23 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:35.520 18:39:23 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.520 18:39:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:35.520 18:39:23 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:35.520 18:39:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.520 18:39:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.520 18:39:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.777 ************************************ 00:07:35.777 START TEST nvmf_example 00:07:35.777 ************************************ 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:35.777 * Looking for test storage... 
00:07:35.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.777 18:39:23 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.777 18:39:23 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.778 18:39:23 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.778 
18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.778 18:39:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.682 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.682 18:39:25 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.682 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.683 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.683 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.683 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.683 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:37.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:07:37.943 00:07:37.943 --- 10.0.0.2 ping statistics --- 00:07:37.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.943 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:07:37.943 00:07:37.943 --- 10.0.0.1 ping statistics --- 00:07:37.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.943 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.943 18:39:25 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3481369 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3481369 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3481369 ']' 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.943 18:39:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.943 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.880 18:39:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.880 18:39:27 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:38.880 18:39:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:38.880 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.087 Initializing NVMe Controllers 00:07:51.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:51.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:51.087 Initialization complete. Launching workers. 
00:07:51.087 ======================================================== 00:07:51.087 Latency(us) 00:07:51.087 Device Information : IOPS MiB/s Average min max 00:07:51.087 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15021.70 58.68 4261.66 887.72 20096.84 00:07:51.087 ======================================================== 00:07:51.087 Total : 15021.70 58.68 4261.66 887.72 20096.84 00:07:51.087 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.087 rmmod nvme_tcp 00:07:51.087 rmmod nvme_fabrics 00:07:51.087 rmmod nvme_keyring 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3481369 ']' 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3481369 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3481369 ']' 00:07:51.087 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3481369 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3481369 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3481369' 00:07:51.088 killing process with pid 3481369 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3481369 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3481369 00:07:51.088 nvmf threads initialize successfully 00:07:51.088 bdev subsystem init successfully 00:07:51.088 created a nvmf target service 00:07:51.088 create targets's poll groups done 00:07:51.088 all subsystems of target started 00:07:51.088 nvmf target is running 00:07:51.088 all subsystems of target stopped 00:07:51.088 destroy targets's poll groups done 00:07:51.088 destroyed the nvmf target service 00:07:51.088 bdev subsystem finish successfully 00:07:51.088 nvmf threads destroy successfully 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.088 18:39:37 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.656 18:39:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.656 18:39:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:51.656 18:39:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.656 18:39:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.656 00:07:51.656 real 0m15.962s 00:07:51.656 user 0m45.529s 00:07:51.656 sys 0m3.257s 00:07:51.656 18:39:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.656 18:39:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:51.656 ************************************ 00:07:51.656 END TEST nvmf_example 00:07:51.656 ************************************ 00:07:51.656 18:39:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:51.656 18:39:39 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:51.656 18:39:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.656 18:39:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.656 18:39:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.656 ************************************ 00:07:51.656 START TEST nvmf_filesystem 00:07:51.656 ************************************ 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:51.656 * Looking for test storage... 
00:07:51.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:51.656 18:39:39 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:51.656 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem 
-- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # 
CONFIG_APPS=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:51.657 18:39:39 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ 
#ifndef SPDK_CONFIG_H 00:07:51.657 #define SPDK_CONFIG_H 00:07:51.657 #define SPDK_CONFIG_APPS 1 00:07:51.657 #define SPDK_CONFIG_ARCH native 00:07:51.657 #undef SPDK_CONFIG_ASAN 00:07:51.657 #undef SPDK_CONFIG_AVAHI 00:07:51.657 #undef SPDK_CONFIG_CET 00:07:51.657 #define SPDK_CONFIG_COVERAGE 1 00:07:51.657 #define SPDK_CONFIG_CROSS_PREFIX 00:07:51.657 #undef SPDK_CONFIG_CRYPTO 00:07:51.657 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:51.657 #undef SPDK_CONFIG_CUSTOMOCF 00:07:51.657 #undef SPDK_CONFIG_DAOS 00:07:51.657 #define SPDK_CONFIG_DAOS_DIR 00:07:51.657 #define SPDK_CONFIG_DEBUG 1 00:07:51.657 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:51.657 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:51.657 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:51.657 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:51.657 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:51.657 #undef SPDK_CONFIG_DPDK_UADK 00:07:51.657 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:51.657 #define SPDK_CONFIG_EXAMPLES 1 00:07:51.657 #undef SPDK_CONFIG_FC 00:07:51.657 #define SPDK_CONFIG_FC_PATH 00:07:51.657 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:51.657 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:51.657 #undef SPDK_CONFIG_FUSE 00:07:51.657 #undef SPDK_CONFIG_FUZZER 00:07:51.657 #define SPDK_CONFIG_FUZZER_LIB 00:07:51.657 #undef SPDK_CONFIG_GOLANG 00:07:51.657 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:51.657 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:51.657 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:51.657 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:51.657 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:51.657 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:51.657 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:51.657 #define SPDK_CONFIG_IDXD 1 00:07:51.657 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:51.657 #undef 
SPDK_CONFIG_IPSEC_MB 00:07:51.657 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:51.657 #define SPDK_CONFIG_ISAL 1 00:07:51.657 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:51.657 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:51.657 #define SPDK_CONFIG_LIBDIR 00:07:51.657 #undef SPDK_CONFIG_LTO 00:07:51.657 #define SPDK_CONFIG_MAX_LCORES 128 00:07:51.657 #define SPDK_CONFIG_NVME_CUSE 1 00:07:51.657 #undef SPDK_CONFIG_OCF 00:07:51.657 #define SPDK_CONFIG_OCF_PATH 00:07:51.657 #define SPDK_CONFIG_OPENSSL_PATH 00:07:51.657 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:51.657 #define SPDK_CONFIG_PGO_DIR 00:07:51.657 #undef SPDK_CONFIG_PGO_USE 00:07:51.657 #define SPDK_CONFIG_PREFIX /usr/local 00:07:51.657 #undef SPDK_CONFIG_RAID5F 00:07:51.657 #undef SPDK_CONFIG_RBD 00:07:51.657 #define SPDK_CONFIG_RDMA 1 00:07:51.657 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:51.657 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:51.657 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:51.657 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:51.657 #define SPDK_CONFIG_SHARED 1 00:07:51.657 #undef SPDK_CONFIG_SMA 00:07:51.657 #define SPDK_CONFIG_TESTS 1 00:07:51.657 #undef SPDK_CONFIG_TSAN 00:07:51.657 #define SPDK_CONFIG_UBLK 1 00:07:51.657 #define SPDK_CONFIG_UBSAN 1 00:07:51.657 #undef SPDK_CONFIG_UNIT_TESTS 00:07:51.657 #undef SPDK_CONFIG_URING 00:07:51.657 #define SPDK_CONFIG_URING_PATH 00:07:51.657 #undef SPDK_CONFIG_URING_ZNS 00:07:51.657 #undef SPDK_CONFIG_USDT 00:07:51.657 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:51.657 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:51.657 #define SPDK_CONFIG_VFIO_USER 1 00:07:51.657 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:51.657 #define SPDK_CONFIG_VHOST 1 00:07:51.657 #define SPDK_CONFIG_VIRTIO 1 00:07:51.657 #undef SPDK_CONFIG_VTUNE 00:07:51.657 #define SPDK_CONFIG_VTUNE_DIR 00:07:51.657 #define SPDK_CONFIG_WERROR 1 00:07:51.657 #define SPDK_CONFIG_WPDK_DIR 00:07:51.657 #undef SPDK_CONFIG_XNVME 00:07:51.657 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:51.657 18:39:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:51.657 18:39:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:51.657 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:51.658 18:39:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export 
SPDK_TEST_VMD 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export 
SPDK_TEST_NVMF_NICS 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:51.658 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:51.919 18:39:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:51.919 18:39:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3483109 ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3483109 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.J2NHoA 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.J2NHoA/tests/target /tmp/spdk.J2NHoA 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:07:51.919 18:39:39 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=52943142912 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9051549696 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=8761344 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996914176 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=434176 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:51.919 * Looking for test storage... 
00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=52943142912 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=11266142208 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
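The storage probe traced above walks candidate directories, uses `df` plus an awk filter to find the backing mount point, and compares available space against the requested size before exporting `SPDK_TEST_STORAGE`. A minimal standalone sketch of the same pattern (the directory and the 4 KiB threshold are illustrative, not the suite's values):

```shell
#!/usr/bin/env bash
# Sketch of the test-storage probe seen in the trace: find the mount point
# that backs a directory and check that it has enough free space.
# target_dir and requested_size are illustrative values.
set -euo pipefail

target_dir=${1:-/tmp}
requested_size=4096   # bytes; illustrative threshold

# df prints a header line; the awk filter keeps only the mount-point column,
# mirroring: df "$dir" | awk '$1 !~ /Filesystem/{print $6}'
mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')

# Available space in bytes for that directory's filesystem (-B1 = 1-byte blocks)
target_space=$(df -B1 --output=avail "$target_dir" | tail -n1 | tr -d ' ')

if (( target_space >= requested_size )); then
    printf '* Found test storage at %s (mount %s)\n' "$target_dir" "$mount_point"
else
    printf '* Not enough space on %s\n' "$mount_point" >&2
    exit 1
fi
```

The suite additionally special-cases tmpfs/ramfs mounts and grows the filesystem when the target would exceed 95% usage; the sketch keeps only the core probe.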
0 : 0 - 1]' 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
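The `errtrace`/`extdebug`/`PS4` sequence traced a few lines above is what produces the timestamped `file@line` prefixes on every line of this log. A simplified sketch of that instrumentation pattern — `print_backtrace` here is a stand-in for the suite's helper of the same name, not its actual implementation:

```shell
#!/usr/bin/env bash
# Sketch of the xtrace setup seen in the trace: a PS4 that stamps each traced
# command with the time, source file and line, plus an ERR trap that prints
# a backtrace on failure.
set -o errtrace        # let the ERR trap fire inside functions and subshells
shopt -s extdebug      # richer debugging metadata for backtraces

# Simplified stand-in for the suite's print_backtrace helper
print_backtrace() {
    local i
    for ((i = 1; i < ${#FUNCNAME[@]}; i++)); do
        echo "  at ${FUNCNAME[$i]} (${BASH_SOURCE[$i]}:${BASH_LINENO[$((i-1))]})" >&2
    done
}
trap 'trap - ERR; print_backtrace >&2' ERR

# \t in PS4 expands to the current time for every traced command
PS4=' \t ${BASH_SOURCE##*/}@${LINENO} -- \$ '
set -x
echo "tracing enabled"
set +x
```

With this in place, every command echoes to stderr with a timestamp and source location, which is exactly the `18:39:39 ... common/autotest_common.sh@... -- #` shape visible throughout this log.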
00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.919 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.920 18:39:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:53.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:53.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:53.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:53.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
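The "Found net devices under 0000:0a:00.x" lines come from globbing sysfs: for each candidate PCI address, the kernel exposes that device's network interfaces under `/sys/bus/pci/devices/<addr>/net/`. A sketch of that discovery step (the PCI address below is illustrative; on a machine without that device the glob simply finds nothing):

```shell
#!/usr/bin/env bash
# Sketch of the PCI-to-netdev discovery seen in the trace: glob the sysfs
# net/ directory of a PCI device to learn its interface names.
set -u

pci=0000:0a:00.0   # illustrative PCI address
pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)

if [[ -e "${pci_net_devs[0]}" ]]; then
    # strip the directory prefix, keeping just the interface names,
    # mirroring: pci_net_devs=("${pci_net_devs[@]##*/}")
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
    echo "No net devices under $pci"
fi
```

Note that an unmatched glob leaves the literal pattern in the array, which is why the suite tests for existence before using the result.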
00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:53.839 18:39:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.839 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.839 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.839 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:53.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:07:53.839 00:07:53.839 --- 10.0.0.2 ping statistics --- 00:07:53.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.839 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:07:53.839 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:07:53.839 00:07:53.839 --- 10.0.0.1 ping statistics --- 00:07:53.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.840 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.840 18:39:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.110 ************************************ 00:07:54.110 START TEST nvmf_filesystem_no_in_capsule 00:07:54.110 ************************************ 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
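The `nvmf_tcp_init` sequence traced above builds the test network: one NIC is moved into a network namespace as the target side (10.0.0.2), its sibling stays in the root namespace as the initiator side (10.0.0.1), the NVMe/TCP port 4420 is opened, and a ping in each direction verifies connectivity. The sketch below mirrors those commands with the interface names from the log; since the real commands need root and physical NICs, `run` only echoes them here (swap it for `"$@"` as root to actually apply):

```shell
#!/usr/bin/env bash
# Sketch of the namespace plumbing seen in the trace. Interface and
# namespace names mirror the log; run() echoes instead of executing so the
# sketch is safe without root or real hardware.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0       # moved into the namespace, gets 10.0.0.2
INI_IF=cvl_0_1       # stays in the root namespace, gets 10.0.0.1

cmds=()
run() { cmds+=("$*"); echo "+ $*"; }   # replace body with "$@" to apply for real

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Running the target inside the namespace (`ip netns exec "$NS" nvmf_tgt ...`, as `NVMF_TARGET_NS_CMD` does in the log) lets initiator and target share one host while still crossing a real network path.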
in_capsule=0 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3484738 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3484738 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3484738 ']' 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.110 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.110 [2024-07-14 18:39:42.137065] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:54.110 [2024-07-14 18:39:42.137152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.110 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.110 [2024-07-14 18:39:42.202691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.110 [2024-07-14 18:39:42.294177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.110 [2024-07-14 18:39:42.294239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.110 [2024-07-14 18:39:42.294267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.110 [2024-07-14 18:39:42.294278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.110 [2024-07-14 18:39:42.294287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:54.110 [2024-07-14 18:39:42.294376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.110 [2024-07-14 18:39:42.294443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.110 [2024-07-14 18:39:42.294507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.110 [2024-07-14 18:39:42.294509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.434 [2024-07-14 18:39:42.451725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.434 Malloc1 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.434 18:39:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.434 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.692 [2024-07-14 18:39:42.639192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:54.692 { 00:07:54.692 "name": "Malloc1", 00:07:54.692 "aliases": [ 00:07:54.692 "e30b5acf-a7cd-4915-8028-90a1538d77ce" 00:07:54.692 ], 00:07:54.692 "product_name": "Malloc disk", 
00:07:54.692 "block_size": 512, 00:07:54.692 "num_blocks": 1048576, 00:07:54.692 "uuid": "e30b5acf-a7cd-4915-8028-90a1538d77ce", 00:07:54.692 "assigned_rate_limits": { 00:07:54.692 "rw_ios_per_sec": 0, 00:07:54.692 "rw_mbytes_per_sec": 0, 00:07:54.692 "r_mbytes_per_sec": 0, 00:07:54.692 "w_mbytes_per_sec": 0 00:07:54.692 }, 00:07:54.692 "claimed": true, 00:07:54.692 "claim_type": "exclusive_write", 00:07:54.692 "zoned": false, 00:07:54.692 "supported_io_types": { 00:07:54.692 "read": true, 00:07:54.692 "write": true, 00:07:54.692 "unmap": true, 00:07:54.692 "flush": true, 00:07:54.692 "reset": true, 00:07:54.692 "nvme_admin": false, 00:07:54.692 "nvme_io": false, 00:07:54.692 "nvme_io_md": false, 00:07:54.692 "write_zeroes": true, 00:07:54.692 "zcopy": true, 00:07:54.692 "get_zone_info": false, 00:07:54.692 "zone_management": false, 00:07:54.692 "zone_append": false, 00:07:54.692 "compare": false, 00:07:54.692 "compare_and_write": false, 00:07:54.692 "abort": true, 00:07:54.692 "seek_hole": false, 00:07:54.692 "seek_data": false, 00:07:54.692 "copy": true, 00:07:54.692 "nvme_iov_md": false 00:07:54.692 }, 00:07:54.692 "memory_domains": [ 00:07:54.692 { 00:07:54.692 "dma_device_id": "system", 00:07:54.692 "dma_device_type": 1 00:07:54.692 }, 00:07:54.692 { 00:07:54.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.692 "dma_device_type": 2 00:07:54.692 } 00:07:54.692 ], 00:07:54.692 "driver_specific": {} 00:07:54.692 } 00:07:54.692 ]' 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:54.692 
18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:54.692 18:39:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.266 18:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.266 18:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:55.266 18:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.266 18:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:55.266 18:39:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:57.801 18:39:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:58.059 18:39:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:58.993 18:39:47 
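The trace above compares the NVMe device size against the malloc bdev size before partitioning. The arithmetic behind that check can be sketched as follows, using the values reported by `bdev_get_bdevs` in the log (the variable names mirror the script but the block itself is illustrative):

```shell
# Values reported by bdev_get_bdevs in the trace above.
block_size=512
num_blocks=1048576

# bdev size in bytes: 512 B/block * 1048576 blocks = 536870912 B (512 MiB),
# which must equal malloc_size before parted/mkfs run.
bdev_bytes=$(( block_size * num_blocks ))

# sec_size_to_bytes equivalent: /sys/block/<dev>/size counts 512-byte
# sectors, so sectors * 512 yields the same byte count for the device.
echo "$bdev_bytes"   # prints 536870912
```

The `(( nvme_size == malloc_size ))` guard in `filesystem.sh@67` is exactly this comparison: both sides resolve to 536870912 bytes here, so the test proceeds to `parted`.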
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.993 ************************************ 00:07:58.993 START TEST filesystem_ext4 00:07:58.993 ************************************ 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:58.993 18:39:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:58.993 mke2fs 1.46.5 (30-Dec-2021) 00:07:59.249 Discarding device blocks: 0/522240 done 00:07:59.249 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:59.249 Filesystem UUID: edff53d0-0410-41d2-b6e6-a3a1085273d4 00:07:59.249 Superblock backups stored on blocks: 00:07:59.249 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:59.249 00:07:59.249 Allocating group tables: 0/64 done 00:07:59.249 Writing inode tables: 0/64 done 00:07:59.507 Creating journal (8192 blocks): done 00:08:00.333 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:08:00.333 00:08:00.333 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:00.333 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.592 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:00.852 18:39:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3484738 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.852 00:08:00.852 real 0m1.687s 00:08:00.852 user 0m0.025s 00:08:00.852 sys 0m0.050s 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:00.852 ************************************ 00:08:00.852 END TEST filesystem_ext4 00:08:00.852 ************************************ 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.852 ************************************ 00:08:00.852 START TEST filesystem_btrfs 00:08:00.852 ************************************ 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:00.852 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:00.853 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:00.853 18:39:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:01.422 
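The `make_filesystem` helper traced above picks a different force flag per filesystem: the ext4 branch sets `force=-F` (mke2fs uses a capital `-F`), while the btrfs and xfs branches fall through to `force=-f`. A minimal sketch of that selection logic, with an illustrative function name not taken from the script:

```shell
# Hedged sketch of the force-flag selection visible in make_filesystem
# (autotest_common.sh@929-932): mke2fs forces with -F, while mkfs.btrfs
# and mkfs.xfs force with -f. 'pick_force_flag' is a hypothetical name.
pick_force_flag() {
    local fstype=$1
    if [ "$fstype" = ext4 ]; then
        printf '%s\n' "-F"   # mkfs.ext4 -F <dev>
    else
        printf '%s\n' "-f"   # mkfs.btrfs -f <dev> / mkfs.xfs -f <dev>
    fi
}

pick_force_flag ext4   # prints -F
pick_force_flag btrfs  # prints -f
```

This matches the trace: the ext4 run executes `mkfs.ext4 -F /dev/nvme0n1p1`, and the btrfs run executes `mkfs.btrfs -f /dev/nvme0n1p1`.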
btrfs-progs v6.6.2 00:08:01.422 See https://btrfs.readthedocs.io for more information. 00:08:01.422 00:08:01.422 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:01.422 NOTE: several default settings have changed in version 5.15, please make sure 00:08:01.422 this does not affect your deployments: 00:08:01.422 - DUP for metadata (-m dup) 00:08:01.422 - enabled no-holes (-O no-holes) 00:08:01.422 - enabled free-space-tree (-R free-space-tree) 00:08:01.422 00:08:01.422 Label: (null) 00:08:01.422 UUID: 3096188f-8e3c-4401-bd12-7ef42025651e 00:08:01.422 Node size: 16384 00:08:01.422 Sector size: 4096 00:08:01.422 Filesystem size: 510.00MiB 00:08:01.422 Block group profiles: 00:08:01.422 Data: single 8.00MiB 00:08:01.422 Metadata: DUP 32.00MiB 00:08:01.422 System: DUP 8.00MiB 00:08:01.422 SSD detected: yes 00:08:01.422 Zoned device: no 00:08:01.422 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:01.422 Runtime features: free-space-tree 00:08:01.422 Checksum: crc32c 00:08:01.422 Number of devices: 1 00:08:01.422 Devices: 00:08:01.422 ID SIZE PATH 00:08:01.422 1 510.00MiB /dev/nvme0n1p1 00:08:01.422 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 
-- # sync 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3484738 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:01.422 00:08:01.422 real 0m0.653s 00:08:01.422 user 0m0.014s 00:08:01.422 sys 0m0.114s 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:01.422 ************************************ 00:08:01.422 END TEST filesystem_btrfs 00:08:01.422 ************************************ 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:01.422 18:39:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.422 ************************************ 00:08:01.422 START TEST filesystem_xfs 00:08:01.422 ************************************ 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:01.422 18:39:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:01.682 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:01.682 = sectsz=512 attr=2, projid32bit=1 00:08:01.682 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:01.682 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:01.682 data = bsize=4096 blocks=130560, imaxpct=25 00:08:01.682 = sunit=0 swidth=0 blks 00:08:01.682 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:01.682 log =internal log bsize=4096 blocks=16384, version=2 00:08:01.682 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:01.682 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:02.621 Discarding blocks...Done. 00:08:02.621 18:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:02.621 18:39:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3484738 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- 
# lsblk -l -o NAME 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.531 00:08:04.531 real 0m2.861s 00:08:04.531 user 0m0.018s 00:08:04.531 sys 0m0.057s 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:04.531 ************************************ 00:08:04.531 END TEST filesystem_xfs 00:08:04.531 ************************************ 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:04.531 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:04.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o 
NAME,SERIAL 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3484738 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3484738 ']' 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3484738 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
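Both `waitforserial` (connect) and `waitforserial_disconnect` above follow the same polling pattern: run `lsblk -l -o NAME,SERIAL`, count matches for the serial, and retry until the count reaches the expected value. A self-contained sketch of that loop, with a stubbed probe standing in for the real `lsblk | grep -c` pipeline (function names are illustrative, not from the script):

```shell
# Hedged sketch of the waitforserial polling pattern: retry up to 15
# times until the probe reports the expected device count. 'probe' is a
# stand-in for 'lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME'.
wait_for_devices() {
    local expected=$1 probe=$2 i=0 found
    while [ "$i" -le 15 ]; do
        found=$($probe)
        [ "$found" -eq "$expected" ] && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Example: a probe that immediately reports one attached device.
fake_probe() { echo 1; }
wait_for_devices 1 fake_probe && echo connected
```

On disconnect the same loop runs with an expected count of zero, which is why the trace shows a second `lsblk`/`grep` pass before `nvmf_delete_subsystem` is issued.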
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3484738 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3484738' 00:08:04.790 killing process with pid 3484738 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3484738 00:08:04.790 18:39:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3484738 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:05.357 00:08:05.357 real 0m11.246s 00:08:05.357 user 0m43.195s 00:08:05.357 sys 0m1.691s 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.357 ************************************ 00:08:05.357 END TEST nvmf_filesystem_no_in_capsule 00:08:05.357 ************************************ 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.357 
************************************ 00:08:05.357 START TEST nvmf_filesystem_in_capsule 00:08:05.357 ************************************ 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3486298 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3486298 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3486298 ']' 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:05.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.357 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.357 [2024-07-14 18:39:53.436543] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:05.357 [2024-07-14 18:39:53.436627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.357 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.357 [2024-07-14 18:39:53.503591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.616 [2024-07-14 18:39:53.592779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.616 [2024-07-14 18:39:53.592837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.616 [2024-07-14 18:39:53.592854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.616 [2024-07-14 18:39:53.592868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.616 [2024-07-14 18:39:53.592886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:05.616 [2024-07-14 18:39:53.592946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.616 [2024-07-14 18:39:53.593001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.616 [2024-07-14 18:39:53.593118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.616 [2024-07-14 18:39:53.593120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.616 [2024-07-14 18:39:53.747779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.616 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.874 Malloc1 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.874 18:39:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.874 [2024-07-14 18:39:53.933275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.874 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:05.874 { 00:08:05.874 "name": "Malloc1", 00:08:05.874 "aliases": [ 00:08:05.874 "d4ef4688-e95a-4245-97f8-1f2a19d128b2" 00:08:05.874 ], 00:08:05.874 "product_name": "Malloc disk", 00:08:05.874 "block_size": 512, 00:08:05.874 "num_blocks": 1048576, 00:08:05.874 "uuid": "d4ef4688-e95a-4245-97f8-1f2a19d128b2", 00:08:05.874 "assigned_rate_limits": { 
00:08:05.874 "rw_ios_per_sec": 0, 00:08:05.874 "rw_mbytes_per_sec": 0, 00:08:05.874 "r_mbytes_per_sec": 0, 00:08:05.874 "w_mbytes_per_sec": 0 00:08:05.874 }, 00:08:05.874 "claimed": true, 00:08:05.874 "claim_type": "exclusive_write", 00:08:05.874 "zoned": false, 00:08:05.874 "supported_io_types": { 00:08:05.874 "read": true, 00:08:05.874 "write": true, 00:08:05.874 "unmap": true, 00:08:05.874 "flush": true, 00:08:05.874 "reset": true, 00:08:05.874 "nvme_admin": false, 00:08:05.874 "nvme_io": false, 00:08:05.874 "nvme_io_md": false, 00:08:05.874 "write_zeroes": true, 00:08:05.874 "zcopy": true, 00:08:05.874 "get_zone_info": false, 00:08:05.874 "zone_management": false, 00:08:05.874 "zone_append": false, 00:08:05.874 "compare": false, 00:08:05.874 "compare_and_write": false, 00:08:05.874 "abort": true, 00:08:05.874 "seek_hole": false, 00:08:05.874 "seek_data": false, 00:08:05.874 "copy": true, 00:08:05.874 "nvme_iov_md": false 00:08:05.874 }, 00:08:05.874 "memory_domains": [ 00:08:05.874 { 00:08:05.874 "dma_device_id": "system", 00:08:05.874 "dma_device_type": 1 00:08:05.874 }, 00:08:05.874 { 00:08:05.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.875 "dma_device_type": 2 00:08:05.875 } 00:08:05.875 ], 00:08:05.875 "driver_specific": {} 00:08:05.875 } 00:08:05.875 ]' 00:08:05.875 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:05.875 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:05.875 18:39:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:05.875 18:39:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:05.875 18:39:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:05.875 18:39:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:05.875 18:39:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:05.875 18:39:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.442 18:39:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.442 18:39:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:06.442 18:39:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.442 18:39:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:06.442 18:39:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:08.977 18:39:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:08.977 18:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:09.236 18:39:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:10.174 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:10.174 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
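Before `parted` runs above, the test cross-checks two size computations: `jq` pulls `block_size` (512) and `num_blocks` (1048576) out of the `bdev_get_bdevs` JSON, while `sec_size_to_bytes` derives the byte size of `nvme0n1` from the kernel's sector count; the `(( nvme_size == malloc_size ))` gate only passes when both equal 536870912. The arithmetic can be sketched as follows (the helper names here are illustrative, not functions from the suite):

```shell
# Illustrative re-derivation of the size check the log performs.

# bdev side: block_size * num_blocks, as reported by bdev_get_bdevs + jq
bdev_bytes() {
    local bs=$1 nb=$2
    echo $(( bs * nb ))
}

# kernel side: /sys/block/<dev>/size counts 512-byte sectors regardless of
# the device's logical block size, so bytes = sectors * 512
sectors_to_bytes() {
    echo $(( $1 * 512 ))
}

malloc_size=$(bdev_bytes 512 1048576)   # values from the log's jq output
echo "$malloc_size"                     # 536870912, i.e. 512 MiB
```

Both paths land on the same 536870912 bytes, which is why the log echoes `536870912` for `nvme_size` and the equality check falls through to the partitioning step.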
00:08:10.174 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:10.174 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.174 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.432 ************************************ 00:08:10.432 START TEST filesystem_in_capsule_ext4 00:08:10.432 ************************************ 00:08:10.432 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:10.432 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:10.433 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.433 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:10.433 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:10.433 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:10.433 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:10.433 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:10.433 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:10.433 18:39:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:10.433 18:39:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:10.433 mke2fs 1.46.5 (30-Dec-2021) 00:08:10.433 Discarding device blocks: 0/522240 done 00:08:10.433 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:10.433 Filesystem UUID: e4549cb6-92b7-450a-8d2a-3b3cd5506e24 00:08:10.433 Superblock backups stored on blocks: 00:08:10.433 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:10.433 00:08:10.433 Allocating group tables: 0/64 done 00:08:10.433 Writing inode tables: 0/64 done 00:08:10.691 Creating journal (8192 blocks): done 00:08:11.520 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:11.520 00:08:11.520 18:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:11.520 18:39:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.453 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.453 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:12.453 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.453 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:12.453 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:12.453 18:40:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.453 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3486298 00:08:12.453 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.453 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.453 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.454 00:08:12.454 real 0m2.182s 00:08:12.454 user 0m0.015s 00:08:12.454 sys 0m0.065s 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:12.454 ************************************ 00:08:12.454 END TEST filesystem_in_capsule_ext4 00:08:12.454 ************************************ 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.454 
18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.454 ************************************ 00:08:12.454 START TEST filesystem_in_capsule_btrfs 00:08:12.454 ************************************ 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:12.454 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:08:13.018 btrfs-progs v6.6.2 00:08:13.018 See https://btrfs.readthedocs.io for more information. 00:08:13.018 00:08:13.018 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:13.018 NOTE: several default settings have changed in version 5.15, please make sure 00:08:13.018 this does not affect your deployments: 00:08:13.018 - DUP for metadata (-m dup) 00:08:13.018 - enabled no-holes (-O no-holes) 00:08:13.018 - enabled free-space-tree (-R free-space-tree) 00:08:13.018 00:08:13.018 Label: (null) 00:08:13.018 UUID: c95e8bfb-222a-44ef-bbb4-b9523bb6475c 00:08:13.018 Node size: 16384 00:08:13.018 Sector size: 4096 00:08:13.018 Filesystem size: 510.00MiB 00:08:13.018 Block group profiles: 00:08:13.018 Data: single 8.00MiB 00:08:13.018 Metadata: DUP 32.00MiB 00:08:13.018 System: DUP 8.00MiB 00:08:13.018 SSD detected: yes 00:08:13.018 Zoned device: no 00:08:13.018 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:13.018 Runtime features: free-space-tree 00:08:13.018 Checksum: crc32c 00:08:13.018 Number of devices: 1 00:08:13.018 Devices: 00:08:13.018 ID SIZE PATH 00:08:13.018 1 510.00MiB /dev/nvme0n1p1 00:08:13.018 00:08:13.018 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:13.018 18:40:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.275 18:40:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3486298 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.275 00:08:13.275 real 0m0.699s 00:08:13.275 user 0m0.017s 00:08:13.275 sys 0m0.113s 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:13.275 ************************************ 00:08:13.275 END TEST filesystem_in_capsule_btrfs 00:08:13.275 ************************************ 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.275 ************************************ 00:08:13.275 START TEST filesystem_in_capsule_xfs 00:08:13.275 ************************************ 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:13.275 18:40:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:13.275 18:40:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:13.275 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:13.275 = sectsz=512 attr=2, projid32bit=1 00:08:13.275 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:13.275 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:13.275 data = bsize=4096 blocks=130560, imaxpct=25 00:08:13.275 = sunit=0 swidth=0 blks 00:08:13.275 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:13.275 log =internal log bsize=4096 blocks=16384, version=2 00:08:13.275 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:13.275 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:14.206 Discarding blocks...Done. 00:08:14.206 18:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:14.206 18:40:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.128 18:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.128 18:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:16.128 18:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.128 18:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:16.128 18:40:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:16.128 18:40:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3486298 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.128 00:08:16.128 real 0m2.660s 00:08:16.128 user 0m0.012s 00:08:16.128 sys 0m0.062s 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:16.128 ************************************ 00:08:16.128 END TEST filesystem_in_capsule_xfs 00:08:16.128 ************************************ 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:16.128 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:16.385 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:08:16.385 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3486298 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3486298 ']' 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 3486298 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3486298 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3486298' 00:08:16.386 killing process with pid 3486298 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3486298 00:08:16.386 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3486298 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:16.951 00:08:16.951 real 0m11.522s 00:08:16.951 user 0m44.223s 00:08:16.951 sys 0m1.748s 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.951 ************************************ 00:08:16.951 END TEST nvmf_filesystem_in_capsule 00:08:16.951 ************************************ 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:16.951 18:40:04 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.951 rmmod nvme_tcp 00:08:16.951 rmmod nvme_fabrics 00:08:16.951 rmmod nvme_keyring 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.951 18:40:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.853 18:40:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.853 00:08:18.853 real 0m27.256s 00:08:18.853 user 1m28.308s 00:08:18.853 sys 0m5.032s 00:08:18.853 18:40:07 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.853 18:40:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.853 ************************************ 00:08:18.853 END TEST nvmf_filesystem 00:08:18.853 ************************************ 00:08:18.853 18:40:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:18.853 18:40:07 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:18.853 18:40:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.853 18:40:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.853 18:40:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.111 ************************************ 00:08:19.111 START TEST nvmf_target_discovery 00:08:19.111 ************************************ 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:19.111 * Looking for test storage... 
00:08:19.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.111 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.112 18:40:07 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.112 18:40:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.012 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.012 
18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.012 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.012 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.012 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.012 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.013 18:40:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.013 18:40:09 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:08:21.013 00:08:21.013 --- 10.0.0.2 ping statistics --- 00:08:21.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.013 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:08:21.013 00:08:21.013 --- 10.0.0.1 ping statistics --- 00:08:21.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.013 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3489736 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3489736 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3489736 ']' 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.013 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.013 [2024-07-14 18:40:09.199489] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:08:21.013 [2024-07-14 18:40:09.199570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.013 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.272 [2024-07-14 18:40:09.267188] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.272 [2024-07-14 18:40:09.359800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.272 [2024-07-14 18:40:09.359856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.272 [2024-07-14 18:40:09.359893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.272 [2024-07-14 18:40:09.359908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.272 [2024-07-14 18:40:09.359920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:21.272 [2024-07-14 18:40:09.360008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.272 [2024-07-14 18:40:09.360066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.272 [2024-07-14 18:40:09.360189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.272 [2024-07-14 18:40:09.360191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.272 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.272 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:21.272 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.272 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.272 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 [2024-07-14 18:40:09.511809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:21.530 18:40:09 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 Null1 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 [2024-07-14 18:40:09.552129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:21.530 18:40:09 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 Null2 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 Null3 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.530 Null4 00:08:21.530 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.531 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:21.788 00:08:21.788 Discovery Log Number of Records 6, Generation counter 6 00:08:21.788 =====Discovery Log Entry 0====== 00:08:21.788 trtype: tcp 00:08:21.788 adrfam: ipv4 00:08:21.788 subtype: current discovery subsystem 00:08:21.788 treq: not required 00:08:21.788 portid: 0 00:08:21.788 trsvcid: 4420 00:08:21.788 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:21.788 traddr: 10.0.0.2 00:08:21.788 eflags: explicit discovery connections, duplicate discovery information 00:08:21.788 sectype: none 00:08:21.788 =====Discovery Log Entry 1====== 00:08:21.788 trtype: tcp 00:08:21.788 adrfam: ipv4 00:08:21.788 subtype: nvme subsystem 00:08:21.788 treq: not required 00:08:21.788 portid: 0 00:08:21.788 trsvcid: 4420 00:08:21.788 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:21.788 traddr: 10.0.0.2 00:08:21.788 eflags: none 00:08:21.788 sectype: none 00:08:21.788 =====Discovery Log Entry 2====== 00:08:21.788 trtype: tcp 00:08:21.788 adrfam: ipv4 00:08:21.788 subtype: nvme subsystem 00:08:21.788 treq: not required 00:08:21.788 portid: 
0 00:08:21.789 trsvcid: 4420 00:08:21.789 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:21.789 traddr: 10.0.0.2 00:08:21.789 eflags: none 00:08:21.789 sectype: none 00:08:21.789 =====Discovery Log Entry 3====== 00:08:21.789 trtype: tcp 00:08:21.789 adrfam: ipv4 00:08:21.789 subtype: nvme subsystem 00:08:21.789 treq: not required 00:08:21.789 portid: 0 00:08:21.789 trsvcid: 4420 00:08:21.789 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:21.789 traddr: 10.0.0.2 00:08:21.789 eflags: none 00:08:21.789 sectype: none 00:08:21.789 =====Discovery Log Entry 4====== 00:08:21.789 trtype: tcp 00:08:21.789 adrfam: ipv4 00:08:21.789 subtype: nvme subsystem 00:08:21.789 treq: not required 00:08:21.789 portid: 0 00:08:21.789 trsvcid: 4420 00:08:21.789 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:21.789 traddr: 10.0.0.2 00:08:21.789 eflags: none 00:08:21.789 sectype: none 00:08:21.789 =====Discovery Log Entry 5====== 00:08:21.789 trtype: tcp 00:08:21.789 adrfam: ipv4 00:08:21.789 subtype: discovery subsystem referral 00:08:21.789 treq: not required 00:08:21.789 portid: 0 00:08:21.789 trsvcid: 4430 00:08:21.789 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:21.789 traddr: 10.0.0.2 00:08:21.789 eflags: none 00:08:21.789 sectype: none 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:21.789 Perform nvmf subsystem discovery via RPC 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 [ 00:08:21.789 { 00:08:21.789 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:21.789 "subtype": "Discovery", 00:08:21.789 "listen_addresses": [ 00:08:21.789 { 00:08:21.789 "trtype": "TCP", 00:08:21.789 "adrfam": "IPv4", 00:08:21.789 "traddr": "10.0.0.2", 
00:08:21.789 "trsvcid": "4420" 00:08:21.789 } 00:08:21.789 ], 00:08:21.789 "allow_any_host": true, 00:08:21.789 "hosts": [] 00:08:21.789 }, 00:08:21.789 { 00:08:21.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:21.789 "subtype": "NVMe", 00:08:21.789 "listen_addresses": [ 00:08:21.789 { 00:08:21.789 "trtype": "TCP", 00:08:21.789 "adrfam": "IPv4", 00:08:21.789 "traddr": "10.0.0.2", 00:08:21.789 "trsvcid": "4420" 00:08:21.789 } 00:08:21.789 ], 00:08:21.789 "allow_any_host": true, 00:08:21.789 "hosts": [], 00:08:21.789 "serial_number": "SPDK00000000000001", 00:08:21.789 "model_number": "SPDK bdev Controller", 00:08:21.789 "max_namespaces": 32, 00:08:21.789 "min_cntlid": 1, 00:08:21.789 "max_cntlid": 65519, 00:08:21.789 "namespaces": [ 00:08:21.789 { 00:08:21.789 "nsid": 1, 00:08:21.789 "bdev_name": "Null1", 00:08:21.789 "name": "Null1", 00:08:21.789 "nguid": "196AB70C0EF9463E8679334E09DD89F3", 00:08:21.789 "uuid": "196ab70c-0ef9-463e-8679-334e09dd89f3" 00:08:21.789 } 00:08:21.789 ] 00:08:21.789 }, 00:08:21.789 { 00:08:21.789 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:21.789 "subtype": "NVMe", 00:08:21.789 "listen_addresses": [ 00:08:21.789 { 00:08:21.789 "trtype": "TCP", 00:08:21.789 "adrfam": "IPv4", 00:08:21.789 "traddr": "10.0.0.2", 00:08:21.789 "trsvcid": "4420" 00:08:21.789 } 00:08:21.789 ], 00:08:21.789 "allow_any_host": true, 00:08:21.789 "hosts": [], 00:08:21.789 "serial_number": "SPDK00000000000002", 00:08:21.789 "model_number": "SPDK bdev Controller", 00:08:21.789 "max_namespaces": 32, 00:08:21.789 "min_cntlid": 1, 00:08:21.789 "max_cntlid": 65519, 00:08:21.789 "namespaces": [ 00:08:21.789 { 00:08:21.789 "nsid": 1, 00:08:21.789 "bdev_name": "Null2", 00:08:21.789 "name": "Null2", 00:08:21.789 "nguid": "4F8CEC97FCA34DE0805883B6FD601725", 00:08:21.789 "uuid": "4f8cec97-fca3-4de0-8058-83b6fd601725" 00:08:21.789 } 00:08:21.789 ] 00:08:21.789 }, 00:08:21.789 { 00:08:21.789 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:21.789 "subtype": "NVMe", 00:08:21.789 
"listen_addresses": [ 00:08:21.789 { 00:08:21.789 "trtype": "TCP", 00:08:21.789 "adrfam": "IPv4", 00:08:21.789 "traddr": "10.0.0.2", 00:08:21.789 "trsvcid": "4420" 00:08:21.789 } 00:08:21.789 ], 00:08:21.789 "allow_any_host": true, 00:08:21.789 "hosts": [], 00:08:21.789 "serial_number": "SPDK00000000000003", 00:08:21.789 "model_number": "SPDK bdev Controller", 00:08:21.789 "max_namespaces": 32, 00:08:21.789 "min_cntlid": 1, 00:08:21.789 "max_cntlid": 65519, 00:08:21.789 "namespaces": [ 00:08:21.789 { 00:08:21.789 "nsid": 1, 00:08:21.789 "bdev_name": "Null3", 00:08:21.789 "name": "Null3", 00:08:21.789 "nguid": "5A3EFE2497664EC385E640E6B60722E7", 00:08:21.789 "uuid": "5a3efe24-9766-4ec3-85e6-40e6b60722e7" 00:08:21.789 } 00:08:21.789 ] 00:08:21.789 }, 00:08:21.789 { 00:08:21.789 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:21.789 "subtype": "NVMe", 00:08:21.789 "listen_addresses": [ 00:08:21.789 { 00:08:21.789 "trtype": "TCP", 00:08:21.789 "adrfam": "IPv4", 00:08:21.789 "traddr": "10.0.0.2", 00:08:21.789 "trsvcid": "4420" 00:08:21.789 } 00:08:21.789 ], 00:08:21.789 "allow_any_host": true, 00:08:21.789 "hosts": [], 00:08:21.789 "serial_number": "SPDK00000000000004", 00:08:21.789 "model_number": "SPDK bdev Controller", 00:08:21.789 "max_namespaces": 32, 00:08:21.789 "min_cntlid": 1, 00:08:21.789 "max_cntlid": 65519, 00:08:21.789 "namespaces": [ 00:08:21.789 { 00:08:21.789 "nsid": 1, 00:08:21.789 "bdev_name": "Null4", 00:08:21.789 "name": "Null4", 00:08:21.789 "nguid": "0AA73BC8CA604A83A9FCB1CB3AFE1163", 00:08:21.789 "uuid": "0aa73bc8-ca60-4a83-a9fc-b1cb3afe1163" 00:08:21.789 } 00:08:21.789 ] 00:08:21.789 } 00:08:21.789 ] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 18:40:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.047 rmmod nvme_tcp 00:08:22.047 rmmod nvme_fabrics 00:08:22.047 rmmod nvme_keyring 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:22.047 
18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3489736 ']' 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3489736 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3489736 ']' 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3489736 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3489736 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3489736' 00:08:22.047 killing process with pid 3489736 00:08:22.047 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3489736 00:08:22.048 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3489736 00:08:22.306 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.306 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.306 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.306 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.306 18:40:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.306 18:40:10 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.306 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.306 18:40:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.209 18:40:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.209 00:08:24.209 real 0m5.287s 00:08:24.209 user 0m4.582s 00:08:24.209 sys 0m1.694s 00:08:24.209 18:40:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.209 18:40:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.209 ************************************ 00:08:24.209 END TEST nvmf_target_discovery 00:08:24.209 ************************************ 00:08:24.209 18:40:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:24.209 18:40:12 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:24.209 18:40:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:24.209 18:40:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.209 18:40:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.209 ************************************ 00:08:24.209 START TEST nvmf_referrals 00:08:24.209 ************************************ 00:08:24.209 18:40:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:24.467 * Looking for test storage... 
00:08:24.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.467 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.468 
18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.468 18:40:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
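The `paths/export.sh` lines above show the same `/opt/go`, `/opt/golangci`, and `/opt/protoc` directories being prepended on every source, so PATH accumulates many duplicate entries across nested test scripts. An order-preserving dedup is a common cleanup for this pattern; a small illustrative sketch (not part of the test suite):

```python
def dedupe_path(path, sep=":"):
    """Collapse repeated PATH entries, keeping the first occurrence's position."""
    seen = set()
    out = []
    for entry in path.split(sep):
        if entry and entry not in seen:
            seen.add(entry)
            out.append(entry)
    return sep.join(out)

# The duplicated-prefix pattern from the paths/export.sh lines above (abridged):
path = ":".join(["/opt/go/1.21.1/bin", "/opt/golangci/1.54.2/bin"] * 3 + ["/usr/bin"])
print(dedupe_path(path))
# → /opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/bin
```

The behavior in the log is harmless (lookup just hits the first match), but deduplication keeps the exported PATH readable when many scripts re-source the same exports.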
00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:26.368 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:26.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:26.368 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.368 18:40:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:26.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.368 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:26.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:26.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:08:26.369 00:08:26.369 --- 10.0.0.2 ping statistics --- 00:08:26.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.369 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:08:26.369 00:08:26.369 --- 10.0.0.1 ping statistics --- 00:08:26.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.369 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:26.369 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.628 18:40:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3491748 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3491748 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3491748 ']' 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.628 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.628 [2024-07-14 18:40:14.665376] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:26.628 [2024-07-14 18:40:14.665471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.628 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.628 [2024-07-14 18:40:14.738763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.628 [2024-07-14 18:40:14.833448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.628 [2024-07-14 18:40:14.833509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:26.628 [2024-07-14 18:40:14.833524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.628 [2024-07-14 18:40:14.833538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.628 [2024-07-14 18:40:14.833550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.628 [2024-07-14 18:40:14.833610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.628 [2024-07-14 18:40:14.833666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.628 [2024-07-14 18:40:14.833779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.628 [2024-07-14 18:40:14.833782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.886 [2024-07-14 18:40:14.983776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.886 18:40:14 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.886 [2024-07-14 18:40:14.996047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.886 18:40:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:26.887 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.887 18:40:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.887 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.145 18:40:15 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:27.145 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:08:27.403 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:27.662 18:40:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.920 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:28.177 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:28.178 18:40:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:28.178 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.435 rmmod nvme_tcp 00:08:28.435 rmmod nvme_fabrics 00:08:28.435 rmmod nvme_keyring 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3491748 ']' 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3491748 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3491748 ']' 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3491748 00:08:28.435 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:28.436 18:40:16 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.436 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3491748 00:08:28.436 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:28.436 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:28.436 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3491748' 00:08:28.436 killing process with pid 3491748 00:08:28.436 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3491748 00:08:28.436 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3491748 00:08:28.695 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.695 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.695 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.695 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.695 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.695 18:40:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.695 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.695 18:40:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.600 18:40:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:30.600 00:08:30.600 real 0m6.363s 00:08:30.600 user 0m8.907s 00:08:30.600 sys 0m2.096s 00:08:30.600 18:40:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.600 18:40:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.600 ************************************ 
00:08:30.600 END TEST nvmf_referrals 00:08:30.600 ************************************ 00:08:30.600 18:40:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:30.600 18:40:18 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:30.600 18:40:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:30.600 18:40:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.600 18:40:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.859 ************************************ 00:08:30.859 START TEST nvmf_connect_disconnect 00:08:30.859 ************************************ 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:30.859 * Looking for test storage... 00:08:30.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.859 18:40:18 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.859 18:40:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:32.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:32.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.760 18:40:20 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.760 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:32.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:32.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:08:32.761 00:08:32.761 --- 10.0.0.2 ping statistics --- 00:08:32.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.761 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:08:32.761 00:08:32.761 --- 10.0.0.1 ping statistics --- 00:08:32.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.761 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.761 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:33.019 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:33.019 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.019 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:33.019 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.019 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3494034 00:08:33.019 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.019 18:40:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3494034 00:08:33.019 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3494034 ']' 00:08:33.019 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.019 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.019 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.020 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.020 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.020 [2024-07-14 18:40:21.048932] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:08:33.020 [2024-07-14 18:40:21.049016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.020 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.020 [2024-07-14 18:40:21.115454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.020 [2024-07-14 18:40:21.204749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.020 [2024-07-14 18:40:21.204806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.020 [2024-07-14 18:40:21.204820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.020 [2024-07-14 18:40:21.204831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.020 [2024-07-14 18:40:21.204840] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:33.020 [2024-07-14 18:40:21.204937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.020 [2024-07-14 18:40:21.205066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.020 [2024-07-14 18:40:21.205135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.020 [2024-07-14 18:40:21.205133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.278 [2024-07-14 18:40:21.356692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.278 [2024-07-14 18:40:21.413954] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:33.278 18:40:21 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:33.278 18:40:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:35.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.613 [2024-07-14 18:40:46.806597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11abc40 is same with the state(5) to be set 00:08:58.613 [2024-07-14 18:40:46.806652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11abc40 is same with the state(5) to be set 00:08:58.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.933 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.316 [2024-07-14 18:41:56.159632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11abc40 is same with the state(5) to be set 00:10:08.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.730 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.988 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.301 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.737 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:23.737 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:23.737 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.737 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:23.737 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.737 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:23.737 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.737 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.737 rmmod nvme_tcp 00:12:23.737 rmmod nvme_fabrics 00:12:23.737 rmmod nvme_keyring 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3494034 ']' 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3494034 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3494034 ']' 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3494034 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:23.995 18:44:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3494034 00:12:23.995 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:23.995 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:23.995 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3494034' 00:12:23.995 killing process with pid 3494034 00:12:23.995 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3494034 00:12:23.995 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3494034 00:12:24.254 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.254 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.254 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.254 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.254 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.254 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.254 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.254 18:44:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.155 18:44:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:26.155 00:12:26.155 real 3m55.473s 00:12:26.155 user 14m57.483s 00:12:26.155 sys 0m34.136s 00:12:26.155 18:44:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:12:26.155 18:44:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 ************************************ 00:12:26.155 END TEST nvmf_connect_disconnect 00:12:26.155 ************************************ 00:12:26.155 18:44:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:26.155 18:44:14 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.155 18:44:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:26.155 18:44:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.155 18:44:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:26.155 ************************************ 00:12:26.155 START TEST nvmf_multitarget 00:12:26.155 ************************************ 00:12:26.155 18:44:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.413 * Looking for test storage... 
00:12:26.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:26.413 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:26.414 18:44:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:26.414 18:44:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:28.315 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.315 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:28.316 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:28.316 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.316 18:44:16 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:28.316 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.316 18:44:16 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:12:28.316 00:12:28.316 --- 10.0.0.2 ping statistics --- 00:12:28.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.316 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:28.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:12:28.316 00:12:28.316 --- 10.0.0.1 ping statistics --- 00:12:28.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.316 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3525100 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3525100 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 3525100 ']' 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.316 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.576 [2024-07-14 18:44:16.554258] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:12:28.576 [2024-07-14 18:44:16.554332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.576 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.576 [2024-07-14 18:44:16.618075] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.576 [2024-07-14 18:44:16.701997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.576 [2024-07-14 18:44:16.702047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.576 [2024-07-14 18:44:16.702076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.576 [2024-07-14 18:44:16.702095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.576 [2024-07-14 18:44:16.702105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:28.576 [2024-07-14 18:44:16.702171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.576 [2024-07-14 18:44:16.702229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.576 [2024-07-14 18:44:16.702279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.576 [2024-07-14 18:44:16.702281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:28.834 18:44:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:29.092 "nvmf_tgt_1" 00:12:29.092 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:29.092 "nvmf_tgt_2" 00:12:29.092 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.092 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:29.092 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:29.092 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:29.350 true 00:12:29.350 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:29.350 true 00:12:29.350 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.350 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.609 18:44:17 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.609 rmmod nvme_tcp 00:12:29.609 rmmod nvme_fabrics 00:12:29.609 rmmod nvme_keyring 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3525100 ']' 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3525100 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3525100 ']' 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3525100 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3525100 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3525100' 00:12:29.609 killing process with pid 3525100 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3525100 00:12:29.609 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3525100 00:12:29.867 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.867 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:29.867 18:44:17 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:29.867 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.867 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.867 18:44:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.867 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.868 18:44:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.404 18:44:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.404 00:12:32.404 real 0m5.661s 00:12:32.404 user 0m6.557s 00:12:32.404 sys 0m1.846s 00:12:32.404 18:44:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.404 18:44:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.404 ************************************ 00:12:32.404 END TEST nvmf_multitarget 00:12:32.404 ************************************ 00:12:32.404 18:44:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:32.404 18:44:20 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.404 18:44:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:32.404 18:44:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.404 18:44:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:32.404 ************************************ 00:12:32.404 START TEST nvmf_rpc 00:12:32.404 ************************************ 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.404 * Looking for test storage... 
00:12:32.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.404 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.405 18:44:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.312 18:44:22 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.312 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:34.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:34.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:34.313 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:34.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.313 18:44:22 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:12:34.313 00:12:34.313 --- 10.0.0.2 ping statistics --- 00:12:34.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.313 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:34.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:12:34.313 00:12:34.313 --- 10.0.0.1 ping statistics --- 00:12:34.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.313 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:34.313 
18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.313 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3527199 00:12:34.314 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.314 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3527199 00:12:34.314 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3527199 ']' 00:12:34.314 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.314 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.314 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.314 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.314 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.314 [2024-07-14 18:44:22.392062] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:12:34.314 [2024-07-14 18:44:22.392157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.314 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.314 [2024-07-14 18:44:22.460495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.572 [2024-07-14 18:44:22.550107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.573 [2024-07-14 18:44:22.550179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.573 [2024-07-14 18:44:22.550201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.573 [2024-07-14 18:44:22.550212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.573 [2024-07-14 18:44:22.550237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:34.573 [2024-07-14 18:44:22.550325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.573 [2024-07-14 18:44:22.550393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.573 [2024-07-14 18:44:22.550443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.573 [2024-07-14 18:44:22.550445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:34.573 "tick_rate": 2700000000, 00:12:34.573 "poll_groups": [ 00:12:34.573 { 00:12:34.573 "name": "nvmf_tgt_poll_group_000", 00:12:34.573 "admin_qpairs": 0, 00:12:34.573 "io_qpairs": 0, 00:12:34.573 "current_admin_qpairs": 0, 00:12:34.573 "current_io_qpairs": 0, 00:12:34.573 "pending_bdev_io": 0, 00:12:34.573 "completed_nvme_io": 0, 00:12:34.573 "transports": [] 00:12:34.573 }, 00:12:34.573 { 00:12:34.573 "name": "nvmf_tgt_poll_group_001", 00:12:34.573 "admin_qpairs": 0, 00:12:34.573 "io_qpairs": 0, 00:12:34.573 "current_admin_qpairs": 
0, 00:12:34.573 "current_io_qpairs": 0, 00:12:34.573 "pending_bdev_io": 0, 00:12:34.573 "completed_nvme_io": 0, 00:12:34.573 "transports": [] 00:12:34.573 }, 00:12:34.573 { 00:12:34.573 "name": "nvmf_tgt_poll_group_002", 00:12:34.573 "admin_qpairs": 0, 00:12:34.573 "io_qpairs": 0, 00:12:34.573 "current_admin_qpairs": 0, 00:12:34.573 "current_io_qpairs": 0, 00:12:34.573 "pending_bdev_io": 0, 00:12:34.573 "completed_nvme_io": 0, 00:12:34.573 "transports": [] 00:12:34.573 }, 00:12:34.573 { 00:12:34.573 "name": "nvmf_tgt_poll_group_003", 00:12:34.573 "admin_qpairs": 0, 00:12:34.573 "io_qpairs": 0, 00:12:34.573 "current_admin_qpairs": 0, 00:12:34.573 "current_io_qpairs": 0, 00:12:34.573 "pending_bdev_io": 0, 00:12:34.573 "completed_nvme_io": 0, 00:12:34.573 "transports": [] 00:12:34.573 } 00:12:34.573 ] 00:12:34.573 }' 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:34.573 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.833 [2024-07-14 18:44:22.803007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:34.833 "tick_rate": 2700000000, 00:12:34.833 "poll_groups": [ 00:12:34.833 { 00:12:34.833 "name": "nvmf_tgt_poll_group_000", 00:12:34.833 "admin_qpairs": 0, 00:12:34.833 "io_qpairs": 0, 00:12:34.833 "current_admin_qpairs": 0, 00:12:34.833 "current_io_qpairs": 0, 00:12:34.833 "pending_bdev_io": 0, 00:12:34.833 "completed_nvme_io": 0, 00:12:34.833 "transports": [ 00:12:34.833 { 00:12:34.833 "trtype": "TCP" 00:12:34.833 } 00:12:34.833 ] 00:12:34.833 }, 00:12:34.833 { 00:12:34.833 "name": "nvmf_tgt_poll_group_001", 00:12:34.833 "admin_qpairs": 0, 00:12:34.833 "io_qpairs": 0, 00:12:34.833 "current_admin_qpairs": 0, 00:12:34.833 "current_io_qpairs": 0, 00:12:34.833 "pending_bdev_io": 0, 00:12:34.833 "completed_nvme_io": 0, 00:12:34.833 "transports": [ 00:12:34.833 { 00:12:34.833 "trtype": "TCP" 00:12:34.833 } 00:12:34.833 ] 00:12:34.833 }, 00:12:34.833 { 00:12:34.833 "name": "nvmf_tgt_poll_group_002", 00:12:34.833 "admin_qpairs": 0, 00:12:34.833 "io_qpairs": 0, 00:12:34.833 "current_admin_qpairs": 0, 00:12:34.833 "current_io_qpairs": 0, 00:12:34.833 "pending_bdev_io": 0, 00:12:34.833 "completed_nvme_io": 0, 00:12:34.833 "transports": [ 00:12:34.833 { 00:12:34.833 "trtype": "TCP" 00:12:34.833 } 00:12:34.833 ] 00:12:34.833 }, 00:12:34.833 { 00:12:34.833 "name": "nvmf_tgt_poll_group_003", 00:12:34.833 "admin_qpairs": 0, 00:12:34.833 "io_qpairs": 0, 00:12:34.833 "current_admin_qpairs": 0, 00:12:34.833 "current_io_qpairs": 0, 00:12:34.833 "pending_bdev_io": 0, 00:12:34.833 "completed_nvme_io": 0, 00:12:34.833 "transports": [ 00:12:34.833 { 00:12:34.833 "trtype": "TCP" 00:12:34.833 } 00:12:34.833 ] 00:12:34.833 } 
00:12:34.833 ] 00:12:34.833 }' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.833 Malloc1 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.833 [2024-07-14 18:44:22.958938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:34.833 18:44:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:34.834 [2024-07-14 18:44:22.981452] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:34.834 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:34.834 could not add new controller: failed to write to nvme-fabrics device 00:12:34.834 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:34.834 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:34.834 18:44:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:34.834 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:34.834 18:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.834 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.834 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.834 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.834 18:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.401 18:44:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.401 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:35.401 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.401 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:35.401 18:44:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.963 18:44:25 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.963 [2024-07-14 18:44:25.783091] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:37.963 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:37.963 could not add new controller: failed to write to nvme-fabrics device 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:37.963 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:37.964 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.964 18:44:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.964 18:44:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.532 18:44:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.532 18:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:38.532 18:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.532 18:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:38.532 18:44:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.466 [2024-07-14 18:44:28.613615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
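The trace above (target/rpc.sh@53 through @76) exercises the host-allowlist path: with `allow_any_host` disabled, a connect from an unregistered host NQN is rejected by the target ("does not allow host"); after `nvmf_subsystem_add_host` the same connect succeeds, and re-enabling `allow_any_host` restores open access. A minimal sketch of that sequence follows; `rpc_cmd` and `nvme_connect` are stubbed with `echo` here (assumptions, so the sketch runs without a live SPDK target — on a real system `rpc_cmd` wraps scripts/rpc.py and the connect is the nvme-cli call shown in the log):

```shell
# Stubs standing in for the real helpers (assumption: no live target here).
rpc_cmd() { echo "rpc: $*"; }
nvme_connect() { echo "nvme connect: $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

rpc_cmd nvmf_subsystem_allow_any_host -d "$NQN"        # disable open access
nvme_connect -t tcp -n "$NQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420  # rejected on a real target
rpc_cmd nvmf_subsystem_add_host "$NQN" "$HOSTNQN"      # allow this host NQN explicitly
nvme_connect -t tcp -n "$NQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420  # now accepted
rpc_cmd nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"   # revoke the host again
rpc_cmd nvmf_subsystem_allow_any_host -e "$NQN"        # or re-enable open access
```

The two rejected connects in the log correspond to the `NOT nvme connect ...` wrappers, which expect the nvme-fabrics write to fail with I/O error while the subsystem's access control is in force.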
00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.466 18:44:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.033 18:44:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.033 18:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:41.033 18:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.033 18:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:41.033 18:44:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.569 [2024-07-14 18:44:31.398345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.569 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.570 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.570 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.570 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.570 18:44:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.570 18:44:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.828 18:44:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:43.828 18:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.828 18:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.828 18:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:43.828 18:44:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:46.363 18:44:34 
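The `waitforserial` helper visible in the trace (autotest_common.sh@1198–@1208) polls `lsblk -l -o NAME,SERIAL` every 2 seconds, up to 16 attempts, until the expected number of block devices carrying the SPDK serial appears. A paraphrased sketch, with `list_block_serials` as a stub (an assumption here, standing in for the real `lsblk` call so the sketch runs without hardware):

```shell
# Stub for `lsblk -l -o NAME,SERIAL` (assumption: no NVMe hardware present).
list_block_serials() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

# Poll until `want` devices with the given serial show up, as in
# autotest_common.sh: (( i++ <= 15 )) bounds the loop, sleep 2 between tries.
waitforserial() {
  local serial=$1 want=${2:-1} i=0 n
  while (( i++ <= 15 )); do
    n=$(list_block_serials | grep -c "$serial")
    (( n == want )) && return 0
    sleep 2
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME && echo "serial present"
```

The companion `waitforserial_disconnect` (@1219–@1231) inverts the check, returning once `grep -q -w` no longer finds the serial in the `lsblk` output.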
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.363 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.363 [2024-07-14 18:44:34.126510] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.364 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.364 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.364 18:44:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.364 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.364 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.364 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.364 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.364 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.364 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.364 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.623 18:44:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.623 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.623 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.623 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:46.623 18:44:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.178 [2024-07-14 18:44:36.878978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.178 18:44:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.439 18:44:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.439 18:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:12:49.439 18:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.439 18:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.439 18:44:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.974 [2024-07-14 18:44:39.697063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.974 18:44:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.233 18:44:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.233 18:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:52.233 18:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.233 18:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:52.233 18:44:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:54.139 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:54.139 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:54.139 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.139 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:54.139 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.139 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:54.139 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.398 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 
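The five iterations above are the target/rpc.sh@81 loop: each pass creates the subsystem, adds the TCP listener and namespace (fixed nsid 5), opens host access, connects and verifies the serial, then disconnects and tears everything down. A condensed sketch of that loop, with `rpc_cmd` stubbed (an assumption, so it runs standalone) and the connect/wait/disconnect steps elided:

```shell
# Stub for the RPC helper (assumption: no live SPDK target).
rpc_cmd() { echo "rpc: $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
loops=5
for i in $(seq 1 "$loops"); do
  rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5      # fixed nsid 5
  rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
  # ... nvme connect / waitforserial / nvme disconnect happen here ...
  rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
  rpc_cmd nvmf_delete_subsystem "$NQN"
done
```

The follow-on @99 loop in the trace repeats the same create/tear-down cycle but adds the namespace without `-n`, letting the target assign the nsid (removed afterwards as nsid 1).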
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 [2024-07-14 18:44:42.441136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 [2024-07-14 18:44:42.489199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.398 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 [2024-07-14 18:44:42.537379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 [2024-07-14 18:44:42.585519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.399 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.657 [2024-07-14 18:44:42.633694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:54.657 "tick_rate": 2700000000, 00:12:54.657 "poll_groups": [ 00:12:54.657 { 00:12:54.657 "name": "nvmf_tgt_poll_group_000", 00:12:54.657 "admin_qpairs": 2, 00:12:54.657 "io_qpairs": 84, 00:12:54.657 "current_admin_qpairs": 0, 00:12:54.657 "current_io_qpairs": 0, 00:12:54.657 "pending_bdev_io": 0, 00:12:54.657 "completed_nvme_io": 135, 00:12:54.657 "transports": [ 00:12:54.657 { 00:12:54.657 "trtype": "TCP" 00:12:54.657 } 00:12:54.657 ] 00:12:54.657 }, 00:12:54.657 { 00:12:54.657 "name": "nvmf_tgt_poll_group_001", 00:12:54.657 "admin_qpairs": 2, 00:12:54.657 "io_qpairs": 84, 
00:12:54.657 "current_admin_qpairs": 0, 00:12:54.657 "current_io_qpairs": 0, 00:12:54.657 "pending_bdev_io": 0, 00:12:54.657 "completed_nvme_io": 228, 00:12:54.657 "transports": [ 00:12:54.657 { 00:12:54.657 "trtype": "TCP" 00:12:54.657 } 00:12:54.657 ] 00:12:54.657 }, 00:12:54.657 { 00:12:54.657 "name": "nvmf_tgt_poll_group_002", 00:12:54.657 "admin_qpairs": 1, 00:12:54.657 "io_qpairs": 84, 00:12:54.657 "current_admin_qpairs": 0, 00:12:54.657 "current_io_qpairs": 0, 00:12:54.657 "pending_bdev_io": 0, 00:12:54.657 "completed_nvme_io": 113, 00:12:54.657 "transports": [ 00:12:54.657 { 00:12:54.657 "trtype": "TCP" 00:12:54.657 } 00:12:54.657 ] 00:12:54.657 }, 00:12:54.657 { 00:12:54.657 "name": "nvmf_tgt_poll_group_003", 00:12:54.657 "admin_qpairs": 2, 00:12:54.657 "io_qpairs": 84, 00:12:54.657 "current_admin_qpairs": 0, 00:12:54.657 "current_io_qpairs": 0, 00:12:54.657 "pending_bdev_io": 0, 00:12:54.657 "completed_nvme_io": 210, 00:12:54.657 "transports": [ 00:12:54.657 { 00:12:54.657 "trtype": "TCP" 00:12:54.657 } 00:12:54.657 ] 00:12:54.657 } 00:12:54.657 ] 00:12:54.657 }' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:54.657 rmmod nvme_tcp 00:12:54.657 rmmod nvme_fabrics 00:12:54.657 rmmod nvme_keyring 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3527199 ']' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3527199 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3527199 ']' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3527199 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3527199 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:54.657 
18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3527199' 00:12:54.657 killing process with pid 3527199 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3527199 00:12:54.657 18:44:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3527199 00:12:54.916 18:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:54.916 18:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:54.916 18:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:54.916 18:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:54.916 18:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:54.916 18:44:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.916 18:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.916 18:44:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.477 18:44:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:57.477 00:12:57.477 real 0m25.067s 00:12:57.477 user 1m21.468s 00:12:57.477 sys 0m3.955s 00:12:57.477 18:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:57.477 18:44:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.477 ************************************ 00:12:57.477 END TEST nvmf_rpc 00:12:57.477 ************************************ 00:12:57.477 18:44:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:57.477 18:44:45 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:57.477 18:44:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:57.477 18:44:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:12:57.477 18:44:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.477 ************************************ 00:12:57.477 START TEST nvmf_invalid 00:12:57.477 ************************************ 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:57.477 * Looking for test storage... 00:12:57.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:57.477 18:44:45 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.477 18:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.478 18:44:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.402 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:59.403 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.403 18:44:47 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:59.403 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:59.403 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:59.403 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.403 18:44:47 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:59.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:59.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:12:59.403 00:12:59.403 --- 10.0.0.2 ping statistics --- 00:12:59.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.403 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:12:59.403 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:12:59.403 00:12:59.404 --- 10.0.0.1 ping statistics --- 00:12:59.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.404 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- 
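The `nvmf_tcp_init` steps recorded above set up a loopback-style TCP test rig: one port of the NIC is moved into a fresh network namespace so the two ports can talk to each other over real TCP. The commands need root and a dual-port NIC, so this sketch only prints the sequence rather than executing it; the interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) are taken from the log, and the helper name is hypothetical.

```shell
# Dry-run reconstruction of the nvmf_tcp_init sequence seen in the log.
# $1 = target interface (moved into the namespace, gets 10.0.0.2)
# $2 = initiator interface (stays in the root namespace, gets 10.0.0.1)
# $3 = namespace name
setup_tcp_ns() {
  tgt_if=$1 ini_if=$2 ns=$3
  echo "ip -4 addr flush $tgt_if"
  echo "ip -4 addr flush $ini_if"
  echo "ip netns add $ns"
  echo "ip link set $tgt_if netns $ns"
  echo "ip addr add 10.0.0.1/24 dev $ini_if"
  echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
  echo "ip link set $ini_if up"
  echo "ip netns exec $ns ip link set $tgt_if up"
  echo "ip netns exec $ns ip link set lo up"
  echo "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
}

setup_tcp_ns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The final `iptables` rule opens the default NVMe-oF TCP port (4420) on the initiator side; the log then verifies both directions with `ping` before starting the target inside the namespace.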
nvmf/common.sh@481 -- # nvmfpid=3531680 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3531680 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3531680 ']' 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.404 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.404 [2024-07-14 18:44:47.379330] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:12:59.404 [2024-07-14 18:44:47.379404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.404 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.404 [2024-07-14 18:44:47.448708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.404 [2024-07-14 18:44:47.542139] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.404 [2024-07-14 18:44:47.542202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:59.404 [2024-07-14 18:44:47.542218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.404 [2024-07-14 18:44:47.542232] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.404 [2024-07-14 18:44:47.542243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.404 [2024-07-14 18:44:47.542324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.404 [2024-07-14 18:44:47.542382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.404 [2024-07-14 18:44:47.542432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.404 [2024-07-14 18:44:47.542435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.662 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.662 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:12:59.662 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.662 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:59.662 18:44:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:59.662 18:44:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.662 18:44:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:59.662 18:44:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8694 00:12:59.919 [2024-07-14 18:44:47.967671] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:59.919 18:44:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # 
out='request: 00:12:59.919 { 00:12:59.919 "nqn": "nqn.2016-06.io.spdk:cnode8694", 00:12:59.919 "tgt_name": "foobar", 00:12:59.919 "method": "nvmf_create_subsystem", 00:12:59.919 "req_id": 1 00:12:59.919 } 00:12:59.919 Got JSON-RPC error response 00:12:59.919 response: 00:12:59.919 { 00:12:59.919 "code": -32603, 00:12:59.919 "message": "Unable to find target foobar" 00:12:59.919 }' 00:12:59.919 18:44:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:59.919 { 00:12:59.919 "nqn": "nqn.2016-06.io.spdk:cnode8694", 00:12:59.919 "tgt_name": "foobar", 00:12:59.919 "method": "nvmf_create_subsystem", 00:12:59.919 "req_id": 1 00:12:59.919 } 00:12:59.919 Got JSON-RPC error response 00:12:59.919 response: 00:12:59.919 { 00:12:59.919 "code": -32603, 00:12:59.919 "message": "Unable to find target foobar" 00:12:59.919 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:59.919 18:44:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:59.919 18:44:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19892 00:13:00.177 [2024-07-14 18:44:48.240624] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19892: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:00.177 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:00.177 { 00:13:00.177 "nqn": "nqn.2016-06.io.spdk:cnode19892", 00:13:00.177 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:00.178 "method": "nvmf_create_subsystem", 00:13:00.178 "req_id": 1 00:13:00.178 } 00:13:00.178 Got JSON-RPC error response 00:13:00.178 response: 00:13:00.178 { 00:13:00.178 "code": -32602, 00:13:00.178 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:00.178 }' 00:13:00.178 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:00.178 { 00:13:00.178 "nqn": 
"nqn.2016-06.io.spdk:cnode19892", 00:13:00.178 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:00.178 "method": "nvmf_create_subsystem", 00:13:00.178 "req_id": 1 00:13:00.178 } 00:13:00.178 Got JSON-RPC error response 00:13:00.178 response: 00:13:00.178 { 00:13:00.178 "code": -32602, 00:13:00.178 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:00.178 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:00.178 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:00.178 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4516 00:13:00.436 [2024-07-14 18:44:48.489472] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4516: invalid model number 'SPDK_Controller' 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:00.436 { 00:13:00.436 "nqn": "nqn.2016-06.io.spdk:cnode4516", 00:13:00.436 "model_number": "SPDK_Controller\u001f", 00:13:00.436 "method": "nvmf_create_subsystem", 00:13:00.436 "req_id": 1 00:13:00.436 } 00:13:00.436 Got JSON-RPC error response 00:13:00.436 response: 00:13:00.436 { 00:13:00.436 "code": -32602, 00:13:00.436 "message": "Invalid MN SPDK_Controller\u001f" 00:13:00.436 }' 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:00.436 { 00:13:00.436 "nqn": "nqn.2016-06.io.spdk:cnode4516", 00:13:00.436 "model_number": "SPDK_Controller\u001f", 00:13:00.436 "method": "nvmf_create_subsystem", 00:13:00.436 "req_id": 1 00:13:00.436 } 00:13:00.436 Got JSON-RPC error response 00:13:00.436 response: 00:13:00.436 { 00:13:00.436 "code": -32602, 00:13:00.436 "message": "Invalid MN SPDK_Controller\u001f" 00:13:00.436 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- 
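Each negative test above follows the same pattern: issue an `nvmf_create_subsystem` RPC with a bad parameter, capture the raw JSON-RPC error text, and glob-match it against the expected message (`[[ $out == *\I\n\v\a\l\i\d\ \S\N* ]]` and similar). A minimal sketch of that check, with `match_rpc_error` a hypothetical helper and the response text abbreviated from the log:

```shell
# Succeed when the captured JSON-RPC error output contains the expected
# substring, mirroring the glob checks target/invalid.sh performs.
match_rpc_error() {
  case $1 in
    (*"$2"*) return 0 ;;
    (*)      return 1 ;;
  esac
}

out='{"code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME"}'
if match_rpc_error "$out" "Invalid SN"; then
  echo "error matched"
fi
```

Matching on the message substring rather than the full JSON keeps the test robust to field ordering and whitespace in the RPC response.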
target/invalid.sh@19 -- # local length=21 ll 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.436 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x3d' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='<' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 
00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:13:00.437 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '_*=QPJ4nZg}oP3;tjj#|6euWx5[yl:2>"VOO;6w[' 00:13:00.957 18:44:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ' sFl3&cMH,,>oP3;tjj#|6euWx5[yl:2>"VOO;6w[' nqn.2016-06.io.spdk:cnode25652 00:13:01.214 [2024-07-14 18:44:49.211942] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25652: invalid model number ' sFl3&cMH,,>oP3;tjj#|6euWx5[yl:2>"VOO;6w[' 00:13:01.214 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:01.214 { 00:13:01.214 "nqn": "nqn.2016-06.io.spdk:cnode25652", 00:13:01.214 "model_number": " sFl3&cMH,,>oP3;tjj#|6euWx5[yl:2>\"VOO;6w[", 00:13:01.214 "method": "nvmf_create_subsystem", 00:13:01.214 "req_id": 1 00:13:01.214 } 00:13:01.214 Got JSON-RPC error response 00:13:01.214 response: 00:13:01.214 { 00:13:01.214 "code": -32602, 00:13:01.214 "message": "Invalid MN sFl3&cMH,,>oP3;tjj#|6euWx5[yl:2>\"VOO;6w[" 00:13:01.214 }' 00:13:01.214 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:01.214 { 00:13:01.214 "nqn": "nqn.2016-06.io.spdk:cnode25652", 00:13:01.214 "model_number": " sFl3&cMH,,>oP3;tjj#|6euWx5[yl:2>\"VOO;6w[", 00:13:01.214 "method": "nvmf_create_subsystem", 00:13:01.214 "req_id": 1 00:13:01.214 } 00:13:01.214 Got JSON-RPC error response 00:13:01.214 response: 00:13:01.214 { 00:13:01.214 "code": -32602, 00:13:01.214 "message": "Invalid MN 
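The long `printf %x` / `echo -e` run above is `gen_random_s` building a string one character at a time from the printable range 32..127 (the `chars` array). A compact sketch of the same idea, collapsed into a single `awk` call rather than the per-character loop the script uses:

```shell
# Generate $1 random characters drawn from the ASCII range 32-127,
# the same character set as the chars array in target/invalid.sh.
gen_random_s() {
  awk -v n="$1" 'BEGIN {
    srand()
    for (i = 0; i < n; i++) printf "%c", 32 + int(rand() * 96)
    print ""
  }'
}

gen_random_s 21
```

Generating the string this way exercises quoting-sensitive characters (`*`, `<`, `}`, backslash, DEL) in serial and model numbers, which is exactly what the subsequent invalid-SN/MN tests rely on.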
sFl3&cMH,,>oP3;tjj#|6euWx5[yl:2>\"VOO;6w[" 00:13:01.214 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:01.214 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:01.472 [2024-07-14 18:44:49.456831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.472 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:01.730 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:01.730 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:01.730 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:01.730 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:01.730 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:01.988 [2024-07-14 18:44:49.958468] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:01.988 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:01.989 { 00:13:01.989 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:01.989 "listen_address": { 00:13:01.989 "trtype": "tcp", 00:13:01.989 "traddr": "", 00:13:01.989 "trsvcid": "4421" 00:13:01.989 }, 00:13:01.989 "method": "nvmf_subsystem_remove_listener", 00:13:01.989 "req_id": 1 00:13:01.989 } 00:13:01.989 Got JSON-RPC error response 00:13:01.989 response: 00:13:01.989 { 00:13:01.989 "code": -32602, 00:13:01.989 "message": "Invalid parameters" 00:13:01.989 }' 00:13:01.989 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:01.989 { 00:13:01.989 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:01.989 "listen_address": { 00:13:01.989 "trtype": 
"tcp", 00:13:01.989 "traddr": "", 00:13:01.989 "trsvcid": "4421" 00:13:01.989 }, 00:13:01.989 "method": "nvmf_subsystem_remove_listener", 00:13:01.989 "req_id": 1 00:13:01.989 } 00:13:01.989 Got JSON-RPC error response 00:13:01.989 response: 00:13:01.989 { 00:13:01.989 "code": -32602, 00:13:01.989 "message": "Invalid parameters" 00:13:01.989 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:01.989 18:44:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode631 -i 0 00:13:01.989 [2024-07-14 18:44:50.207317] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode631: invalid cntlid range [0-65519] 00:13:02.247 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:02.247 { 00:13:02.247 "nqn": "nqn.2016-06.io.spdk:cnode631", 00:13:02.247 "min_cntlid": 0, 00:13:02.247 "method": "nvmf_create_subsystem", 00:13:02.247 "req_id": 1 00:13:02.247 } 00:13:02.247 Got JSON-RPC error response 00:13:02.247 response: 00:13:02.247 { 00:13:02.247 "code": -32602, 00:13:02.247 "message": "Invalid cntlid range [0-65519]" 00:13:02.247 }' 00:13:02.247 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:02.247 { 00:13:02.247 "nqn": "nqn.2016-06.io.spdk:cnode631", 00:13:02.247 "min_cntlid": 0, 00:13:02.247 "method": "nvmf_create_subsystem", 00:13:02.247 "req_id": 1 00:13:02.247 } 00:13:02.247 Got JSON-RPC error response 00:13:02.247 response: 00:13:02.247 { 00:13:02.247 "code": -32602, 00:13:02.247 "message": "Invalid cntlid range [0-65519]" 00:13:02.247 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.247 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7099 -i 65520 00:13:02.247 [2024-07-14 18:44:50.460109] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7099: invalid cntlid range [65520-65519] 00:13:02.506 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:02.506 { 00:13:02.506 "nqn": "nqn.2016-06.io.spdk:cnode7099", 00:13:02.506 "min_cntlid": 65520, 00:13:02.506 "method": "nvmf_create_subsystem", 00:13:02.506 "req_id": 1 00:13:02.506 } 00:13:02.506 Got JSON-RPC error response 00:13:02.506 response: 00:13:02.506 { 00:13:02.506 "code": -32602, 00:13:02.506 "message": "Invalid cntlid range [65520-65519]" 00:13:02.506 }' 00:13:02.506 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:02.506 { 00:13:02.506 "nqn": "nqn.2016-06.io.spdk:cnode7099", 00:13:02.506 "min_cntlid": 65520, 00:13:02.506 "method": "nvmf_create_subsystem", 00:13:02.506 "req_id": 1 00:13:02.506 } 00:13:02.506 Got JSON-RPC error response 00:13:02.506 response: 00:13:02.506 { 00:13:02.506 "code": -32602, 00:13:02.506 "message": "Invalid cntlid range [65520-65519]" 00:13:02.506 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.506 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7626 -I 0 00:13:02.506 [2024-07-14 18:44:50.700934] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7626: invalid cntlid range [1-0] 00:13:02.506 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:02.506 { 00:13:02.506 "nqn": "nqn.2016-06.io.spdk:cnode7626", 00:13:02.506 "max_cntlid": 0, 00:13:02.506 "method": "nvmf_create_subsystem", 00:13:02.506 "req_id": 1 00:13:02.506 } 00:13:02.506 Got JSON-RPC error response 00:13:02.506 response: 00:13:02.506 { 00:13:02.506 "code": -32602, 00:13:02.506 "message": "Invalid cntlid range [1-0]" 00:13:02.506 }' 00:13:02.506 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:02.506 { 00:13:02.506 "nqn": 
"nqn.2016-06.io.spdk:cnode7626", 00:13:02.506 "max_cntlid": 0, 00:13:02.506 "method": "nvmf_create_subsystem", 00:13:02.506 "req_id": 1 00:13:02.506 } 00:13:02.506 Got JSON-RPC error response 00:13:02.506 response: 00:13:02.506 { 00:13:02.506 "code": -32602, 00:13:02.506 "message": "Invalid cntlid range [1-0]" 00:13:02.506 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.506 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9553 -I 65520 00:13:02.765 [2024-07-14 18:44:50.961833] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9553: invalid cntlid range [1-65520] 00:13:02.765 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:02.765 { 00:13:02.765 "nqn": "nqn.2016-06.io.spdk:cnode9553", 00:13:02.765 "max_cntlid": 65520, 00:13:02.765 "method": "nvmf_create_subsystem", 00:13:02.765 "req_id": 1 00:13:02.765 } 00:13:02.765 Got JSON-RPC error response 00:13:02.765 response: 00:13:02.765 { 00:13:02.765 "code": -32602, 00:13:02.765 "message": "Invalid cntlid range [1-65520]" 00:13:02.765 }' 00:13:02.765 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:02.765 { 00:13:02.765 "nqn": "nqn.2016-06.io.spdk:cnode9553", 00:13:02.765 "max_cntlid": 65520, 00:13:02.765 "method": "nvmf_create_subsystem", 00:13:02.765 "req_id": 1 00:13:02.765 } 00:13:02.765 Got JSON-RPC error response 00:13:02.765 response: 00:13:02.765 { 00:13:02.765 "code": -32602, 00:13:02.765 "message": "Invalid cntlid range [1-65520]" 00:13:02.765 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:02.765 18:44:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32180 -i 6 -I 5 00:13:03.024 [2024-07-14 18:44:51.206625] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode32180: invalid cntlid range [6-5] 00:13:03.024 18:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:03.024 { 00:13:03.024 "nqn": "nqn.2016-06.io.spdk:cnode32180", 00:13:03.024 "min_cntlid": 6, 00:13:03.024 "max_cntlid": 5, 00:13:03.024 "method": "nvmf_create_subsystem", 00:13:03.024 "req_id": 1 00:13:03.024 } 00:13:03.024 Got JSON-RPC error response 00:13:03.024 response: 00:13:03.024 { 00:13:03.024 "code": -32602, 00:13:03.024 "message": "Invalid cntlid range [6-5]" 00:13:03.024 }' 00:13:03.024 18:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:03.024 { 00:13:03.024 "nqn": "nqn.2016-06.io.spdk:cnode32180", 00:13:03.024 "min_cntlid": 6, 00:13:03.024 "max_cntlid": 5, 00:13:03.024 "method": "nvmf_create_subsystem", 00:13:03.024 "req_id": 1 00:13:03.024 } 00:13:03.024 Got JSON-RPC error response 00:13:03.024 response: 00:13:03.024 { 00:13:03.024 "code": -32602, 00:13:03.024 "message": "Invalid cntlid range [6-5]" 00:13:03.024 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.024 18:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:03.283 { 00:13:03.283 "name": "foobar", 00:13:03.283 "method": "nvmf_delete_target", 00:13:03.283 "req_id": 1 00:13:03.283 } 00:13:03.283 Got JSON-RPC error response 00:13:03.283 response: 00:13:03.283 { 00:13:03.283 "code": -32602, 00:13:03.283 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:03.283 }' 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:03.283 { 00:13:03.283 "name": "foobar", 00:13:03.283 "method": "nvmf_delete_target", 00:13:03.283 "req_id": 1 00:13:03.283 } 00:13:03.283 Got JSON-RPC error response 00:13:03.283 response: 00:13:03.283 { 00:13:03.283 "code": -32602, 00:13:03.283 "message": "The specified target doesn't exist, cannot delete it." 00:13:03.283 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.283 rmmod nvme_tcp 00:13:03.283 rmmod nvme_fabrics 00:13:03.283 rmmod nvme_keyring 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3531680 ']' 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3531680 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3531680 ']' 00:13:03.283 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3531680 00:13:03.283 18:44:51 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:03.284 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:03.284 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3531680 00:13:03.284 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:03.284 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:03.284 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3531680' 00:13:03.284 killing process with pid 3531680 00:13:03.284 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3531680 00:13:03.284 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3531680 00:13:03.542 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:03.542 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:03.542 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:03.542 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:03.542 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:03.542 18:44:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.542 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.542 18:44:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.078 18:44:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.078 00:13:06.078 real 0m8.516s 00:13:06.078 user 0m20.001s 00:13:06.078 sys 0m2.362s 00:13:06.078 18:44:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:06.078 18:44:53 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.078 ************************************ 00:13:06.078 END TEST nvmf_invalid 00:13:06.078 ************************************ 00:13:06.078 18:44:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:06.078 18:44:53 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:06.078 18:44:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:06.078 18:44:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.078 18:44:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.078 ************************************ 00:13:06.078 START TEST nvmf_abort 00:13:06.078 ************************************ 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:06.078 * Looking for test storage... 
00:13:06.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.078 18:44:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:07.987 18:44:55 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:07.987 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:07.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:07.987 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:07.987 Found net devices under 
0000:0a:00.1: cvl_0_1 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:07.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:13:07.987 00:13:07.987 --- 10.0.0.2 ping statistics --- 00:13:07.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.987 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:07.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:13:07.987 00:13:07.987 --- 10.0.0.1 ping statistics --- 00:13:07.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.987 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:07.987 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3534315 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3534315 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3534315 ']' 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.988 18:44:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:07.988 [2024-07-14 18:44:56.033701] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:13:07.988 [2024-07-14 18:44:56.033768] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.988 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.988 [2024-07-14 18:44:56.101675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:07.988 [2024-07-14 18:44:56.199748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.988 [2024-07-14 18:44:56.199821] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.988 [2024-07-14 18:44:56.199838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.988 [2024-07-14 18:44:56.199852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.988 [2024-07-14 18:44:56.199863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:07.988 [2024-07-14 18:44:56.199943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.988 [2024-07-14 18:44:56.199997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.988 [2024-07-14 18:44:56.199999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.247 [2024-07-14 18:44:56.353401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.247 Malloc0 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.247 Delay0 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.247 [2024-07-14 18:44:56.425012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.247 18:44:56 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:08.247 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.505 [2024-07-14 18:44:56.529870] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:11.041 Initializing NVMe Controllers 00:13:11.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:11.041 controller IO queue size 128 less than required 00:13:11.041 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:11.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:11.041 Initialization complete. Launching workers. 
00:13:11.041 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33489 00:13:11.041 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33550, failed to submit 62 00:13:11.041 success 33493, unsuccess 57, failed 0 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.041 rmmod nvme_tcp 00:13:11.041 rmmod nvme_fabrics 00:13:11.041 rmmod nvme_keyring 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3534315 ']' 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3534315 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3534315 ']' 00:13:11.041 18:44:58 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3534315 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3534315 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3534315' 00:13:11.041 killing process with pid 3534315 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3534315 00:13:11.041 18:44:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3534315 00:13:11.041 18:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.041 18:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.041 18:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.041 18:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.041 18:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.041 18:44:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.041 18:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.041 18:44:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.945 18:45:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:12.945 00:13:12.945 real 0m7.295s 00:13:12.945 user 0m10.685s 00:13:12.945 sys 0m2.504s 00:13:12.945 18:45:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:13:12.945 18:45:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:12.945 ************************************ 00:13:12.945 END TEST nvmf_abort 00:13:12.945 ************************************ 00:13:12.945 18:45:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:12.945 18:45:01 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:12.945 18:45:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:12.945 18:45:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.945 18:45:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:12.945 ************************************ 00:13:12.945 START TEST nvmf_ns_hotplug_stress 00:13:12.945 ************************************ 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:12.945 * Looking for test storage... 
00:13:12.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:12.945 18:45:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:12.945 18:45:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:15.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.477 
18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.477 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:15.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:15.478 
Found net devices under 0000:0a:00.0: cvl_0_0 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:15.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.478 18:45:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:15.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:13:15.478 00:13:15.478 --- 10.0.0.2 ping statistics --- 00:13:15.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.478 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:13:15.478 00:13:15.478 --- 10.0.0.1 ping statistics --- 00:13:15.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.478 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3536644 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3536644 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3536644 ']' 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.478 [2024-07-14 18:45:03.308893] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:13:15.478 [2024-07-14 18:45:03.309009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.478 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.478 [2024-07-14 18:45:03.375252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.478 [2024-07-14 18:45:03.468021] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.478 [2024-07-14 18:45:03.468078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.478 [2024-07-14 18:45:03.468107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.478 [2024-07-14 18:45:03.468120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.478 [2024-07-14 18:45:03.468130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:15.478 [2024-07-14 18:45:03.468281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.478 [2024-07-14 18:45:03.468334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.478 [2024-07-14 18:45:03.468337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.478 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:15.479 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:15.479 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.479 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:15.479 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:15.736 [2024-07-14 18:45:03.835129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.736 18:45:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:15.994 18:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.252 [2024-07-14 18:45:04.426487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:13:16.252 18:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:16.511 18:45:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:17.077 Malloc0 00:13:17.077 18:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:17.335 Delay0 00:13:17.335 18:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.599 18:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:17.903 NULL1 00:13:17.903 18:45:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:18.161 18:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3537068 00:13:18.161 18:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:18.161 18:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:18.161 18:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.161 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.419 18:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.677 18:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:18.677 18:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:18.935 true 00:13:18.935 18:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:18.935 18:45:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.194 18:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.452 18:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:19.452 18:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:19.710 true 00:13:19.710 18:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:19.710 18:45:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.968 18:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.226 18:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:20.226 18:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:20.484 true 00:13:20.484 18:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:20.484 18:45:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.420 Read completed with error (sct=0, sc=11) 00:13:21.420 18:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.678 18:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:21.678 18:45:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:21.936 true 00:13:21.936 18:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:21.936 18:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.194 18:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.452 18:45:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:22.452 18:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:22.710 true 00:13:22.710 18:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:22.710 18:45:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.648 18:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.648 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.906 18:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:23.906 18:45:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:24.165 true 00:13:24.165 18:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:24.165 18:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.425 18:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.685 18:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:24.685 18:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:24.685 true 00:13:24.944 18:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:24.944 18:45:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.878 18:45:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.878 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.135 18:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:26.135 18:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:26.394 true 00:13:26.394 18:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:26.394 18:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.654 18:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.654 18:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:26.654 18:45:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:26.914 true 00:13:27.174 18:45:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 
00:13:27.174 18:45:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.110 18:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.110 18:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:28.110 18:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:28.368 true 00:13:28.368 18:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:28.368 18:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.626 18:45:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.884 18:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:28.884 18:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:29.142 true 00:13:29.142 18:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:29.142 18:45:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.079 18:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.645 18:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:30.645 18:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:30.645 true 00:13:30.645 18:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:30.645 18:45:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.903 18:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.160 18:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:31.160 18:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:31.416 true 00:13:31.416 18:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:31.416 18:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.674 18:45:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.932 18:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:31.932 18:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:32.189 true 00:13:32.189 18:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:32.189 18:45:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.123 18:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.381 18:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:33.382 18:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:33.640 true 00:13:33.640 18:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:33.640 18:45:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.925 18:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.182 18:45:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:34.182 18:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:34.439 true 00:13:34.440 18:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:34.440 18:45:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.372 18:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.372 18:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:35.372 18:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:35.629 true 00:13:35.629 18:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:35.629 18:45:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.887 18:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.145 18:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:36.145 18:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:36.425 true 00:13:36.425 18:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:36.425 18:45:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.360 18:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.618 18:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:37.618 18:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:37.888 true 00:13:37.888 18:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:37.888 18:45:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.144 18:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.402 18:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:38.402 18:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:38.402 true 00:13:38.660 18:45:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:38.660 18:45:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.594 18:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.594 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.852 18:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:39.852 18:45:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:40.110 true 00:13:40.110 18:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:40.110 18:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.368 18:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.626 18:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:40.626 18:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:40.884 true 00:13:40.884 18:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:40.884 18:45:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.820 18:45:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.820 18:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:41.820 18:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:42.079 true 00:13:42.079 18:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:42.079 18:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.338 18:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.596 18:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:42.596 18:45:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:42.854 true 00:13:42.854 18:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:42.854 18:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.791 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:43.791 18:45:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.049 18:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:44.049 18:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:44.307 true 00:13:44.307 18:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:44.307 18:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.566 18:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.824 18:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:44.824 18:45:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:45.082 true 00:13:45.082 18:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:45.082 18:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.013 18:45:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.272 
18:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:46.272 18:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:46.272 true 00:13:46.272 18:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:46.272 18:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.839 18:45:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.839 18:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:46.839 18:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:47.097 true 00:13:47.097 18:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:47.097 18:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.355 18:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.613 18:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:47.613 18:45:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:47.871 true 
00:13:47.871 18:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:47.871 18:45:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.248 Initializing NVMe Controllers 00:13:49.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:49.248 Controller IO queue size 128, less than required. 00:13:49.248 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:49.248 Controller IO queue size 128, less than required. 00:13:49.248 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:49.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:49.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:49.248 Initialization complete. Launching workers. 
00:13:49.248 ======================================================== 00:13:49.248 Latency(us) 00:13:49.248 Device Information : IOPS MiB/s Average min max 00:13:49.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 580.50 0.28 105612.39 2976.49 1011895.31 00:13:49.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9941.29 4.85 12837.92 3634.67 370810.58 00:13:49.248 ======================================================== 00:13:49.248 Total : 10521.78 5.14 17956.38 2976.49 1011895.31 00:13:49.248 00:13:49.248 18:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.248 18:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:49.248 18:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:49.507 true 00:13:49.507 18:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3537068 00:13:49.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3537068) - No such process 00:13:49.507 18:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3537068 00:13:49.507 18:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.764 18:45:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.023 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:50.023 18:45:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:50.023 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:50.023 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:50.023 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:50.311 null0 00:13:50.311 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:50.311 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:50.311 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:50.568 null1 00:13:50.568 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:50.568 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:50.569 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:50.826 null2 00:13:50.826 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:50.826 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:50.826 18:45:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:51.084 null3 00:13:51.084 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:51.084 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:13:51.084 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:51.342 null4 00:13:51.342 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:51.342 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:51.342 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:51.600 null5 00:13:51.600 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:51.600 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:51.600 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:51.858 null6 00:13:51.858 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:51.858 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:51.858 18:45:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:52.116 null7 00:13:52.116 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:52.116 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:52.116 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:52.116 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:52.116 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:52.116 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:52.116 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3541630 3541631 3541633 3541635 3541637 3541639 3541641 3541643 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.117 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.376 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.376 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.376 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.376 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.376 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.376 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.376 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.376 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.634 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.635 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.635 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.635 18:45:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.635 18:45:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.895 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.895 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.895 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.895 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.895 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.895 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.895 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.895 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:53.159 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.160 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:53.418 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.418 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:53.418 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:53.418 18:45:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.418 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.418 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:53.418 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.418 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.676 18:45:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:53.934 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:54.193 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.193 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.193 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.193 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:54.193 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:54.193 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:54.193 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.451 18:45:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.451 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.709 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:54.709 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:13:54.709 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:54.709 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.709 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:54.709 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:54.709 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.709 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.968 18:45:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.968 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.968 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.968 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:54.968 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.968 18:45:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.968 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:54.968 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.968 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.968 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:55.224 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.225 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.225 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.225 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:55.225 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:55.225 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
5 00:13:55.225 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:55.225 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.482 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:55.741 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.741 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.741 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.741 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:55.741 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:55.741 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.741 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.741 18:45:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.999 18:45:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.999 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.258 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.258 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.258 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.258 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.258 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.258 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.258 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.258 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.516 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.516 18:45:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.773 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.773 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.773 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.774 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.774 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.774 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.774 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.774 18:45:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.032 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.289 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.289 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.289 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.289 18:45:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.289 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.289 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.289 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.289 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.547 18:45:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:57.547 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.548 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:57.548 rmmod nvme_tcp 00:13:57.807 rmmod nvme_fabrics 00:13:57.807 rmmod nvme_keyring 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3536644 ']' 00:13:57.807 
18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3536644 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3536644 ']' 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3536644 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3536644 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3536644' 00:13:57.807 killing process with pid 3536644 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3536644 00:13:57.807 18:45:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3536644 00:13:58.065 18:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.065 18:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.065 18:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.065 18:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.065 18:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.065 18:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.065 18:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:58.065 18:45:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.965 18:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:59.965 00:13:59.965 real 0m47.053s 00:13:59.965 user 3m35.189s 00:13:59.965 sys 0m16.142s 00:13:59.965 18:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.965 18:45:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.965 ************************************ 00:13:59.965 END TEST nvmf_ns_hotplug_stress 00:13:59.965 ************************************ 00:13:59.965 18:45:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:59.965 18:45:48 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:59.965 18:45:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:59.965 18:45:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.965 18:45:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.224 ************************************ 00:14:00.224 START TEST nvmf_connect_stress 00:14:00.224 ************************************ 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:00.224 * Looking for test storage... 
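The repeated `(( ++i ))` / `nvmf_subsystem_add_ns` / `nvmf_subsystem_remove_ns` trace above corresponds to the main loop of `target/ns_hotplug_stress.sh` (its lines @16–@18 are named in each trace line). A minimal dry-run sketch of that loop is below; the real script drives `spdk/scripts/rpc.py` against a running nvmf target, so here `rpc` is stubbed to echo the command stream instead. The NQN and the `nsid`-to-`nullN` bdev pairing are taken from the log; the `hotplug_iteration` helper name and the iteration count shown are illustrative, not part of the original script.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ns_hotplug_stress loop traced in the log above.
# rpc is a stub; the real test invokes spdk/scripts/rpc.py against the target.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }

hotplug_iteration() {
    # Attach namespaces: nsid n maps to bdev null(n-1), in random order,
    # which is why the add order differs between iterations in the log.
    for n in $(shuf -e {1..8}); do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    # Then detach all eight namespaces, also in random order.
    for n in $(shuf -e {1..8}); do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n"
    done
}

# The test in the log runs 10 iterations ((( i < 10 ))); two shown here.
for ((i = 0; i < 2; i++)); do
    hotplug_iteration
done
```

Each iteration emits eight add and eight remove commands, matching the bursts of `rpc.py` calls grouped by timestamp in the trace.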
00:14:00.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.224 18:45:48 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.224 18:45:48 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.225 18:45:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.130 18:45:50 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.130 
18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:02.130 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:02.130 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.130 
18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:02.130 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.130 18:45:50 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:02.130 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.130 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:02.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:14:02.131 00:14:02.131 --- 10.0.0.2 ping statistics --- 00:14:02.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.131 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:14:02.131 00:14:02.131 --- 10.0.0.1 ping statistics --- 00:14:02.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.131 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.131 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.389 18:45:50 
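The `nvmf_tcp_init` entries above move the target-side port into its own network namespace so that initiator (10.0.0.1) and target (10.0.0.2) traverse a real network path, then verify both directions with `ping`. A minimal standalone sketch of that plumbing follows; the interface names `cvl_0_0`/`cvl_0_1` are taken from this log, everything is wrapped in a function because it requires root, and `setup_tcp_testbed` itself is a hypothetical helper name, not part of `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the TCP test-bed setup recorded above: the target port is moved
# into a dedicated netns, both sides get a /24 address, and connectivity is
# verified with one ping in each direction. Run as root.
set -euo pipefail

setup_tcp_testbed() {
    local target_if=${1:-cvl_0_0} initiator_if=${2:-cvl_0_1}
    local ns=${3:-cvl_0_0_ns_spdk}

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"            # target port lives in the netns

    ip addr add 10.0.0.1/24 dev "$initiator_if"     # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Allow NVMe/TCP traffic to the default discovery/IO port.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1          # target -> initiator
}
```

Invoked as `setup_tcp_testbed cvl_0_0 cvl_0_1`, this reproduces the `ip netns` / `ip link` / `iptables` sequence the log shows before the ping statistics.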
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3544384 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3544384 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3544384 ']' 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.389 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.389 [2024-07-14 18:45:50.428555] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:02.389 [2024-07-14 18:45:50.428640] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.389 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.389 [2024-07-14 18:45:50.498236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.389 [2024-07-14 18:45:50.587984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
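`waitforlisten 3544384` above blocks until the freshly launched `nvmf_tgt` is up and accepting connections on `/var/tmp/spdk.sock`, using a bounded retry budget (`max_retries=100` per the log). The shape of that wait is a simple poll-until-ready loop; in this sketch a delayed `touch` on a temp path stands in for the target creating its RPC socket, which is an assumption for testability, since the real helper also probes the RPC server itself:

```shell
#!/usr/bin/env bash
# waitforlisten-style bounded retry: poll until the RPC socket path exists,
# giving up after max_retries attempts. A background `touch` simulates
# nvmf_tgt creating /var/tmp/spdk.sock after startup.
set -u

rpc_sock=$(mktemp -u)                 # stand-in for /var/tmp/spdk.sock
( sleep 0.5; touch "$rpc_sock" ) &    # stand-in for the target starting up

max_retries=100
i=0
until [ -e "$rpc_sock" ]; do
    i=$((i + 1))
    if [ "$i" -gt "$max_retries" ]; then
        echo "timed out waiting for $rpc_sock" >&2
        exit 1
    fi
    sleep 0.1
done
echo "listening after $i retries"
```

The same pattern explains why the log prints the "Waiting for process to start up and listen on UNIX domain socket" line once and then proceeds only after the socket appears.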
00:14:02.389 [2024-07-14 18:45:50.588050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.389 [2024-07-14 18:45:50.588067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.389 [2024-07-14 18:45:50.588080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.389 [2024-07-14 18:45:50.588092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.389 [2024-07-14 18:45:50.588209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.389 [2024-07-14 18:45:50.588301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.389 [2024-07-14 18:45:50.588304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.647 [2024-07-14 18:45:50.719375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.647 18:45:50 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.647 [2024-07-14 18:45:50.744041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.647 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.648 NULL1 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3544431 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 
10 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.648 18:45:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.905 18:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.906 18:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:02.906 18:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.906 18:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.906 18:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.470 18:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.470 18:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:03.470 18:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.470 18:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.470 18:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.728 18:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.728 
18:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:03.728 18:45:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.728 18:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.728 18:45:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.985 18:45:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.985 18:45:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:03.985 18:45:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.985 18:45:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.985 18:45:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.243 18:45:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.243 18:45:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:04.243 18:45:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.243 18:45:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.243 18:45:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.808 18:45:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.808 18:45:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:04.808 18:45:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.808 18:45:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.808 18:45:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.065 18:45:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.065 
18:45:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:05.065 18:45:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.065 18:45:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.065 18:45:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.322 18:45:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.322 18:45:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:05.322 18:45:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.322 18:45:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.322 18:45:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.579 18:45:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.579 18:45:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:05.579 18:45:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.579 18:45:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.579 18:45:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.837 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.837 18:45:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:05.837 18:45:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.837 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.837 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.436 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.436 
18:45:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:06.436 18:45:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.436 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.436 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.694 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.694 18:45:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:06.694 18:45:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.694 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.694 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.951 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.951 18:45:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:06.951 18:45:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.951 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.951 18:45:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.228 18:45:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.228 18:45:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:07.228 18:45:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.228 18:45:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.228 18:45:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.486 18:45:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.486 
18:45:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:07.486 18:45:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.486 18:45:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.486 18:45:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.744 18:45:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.744 18:45:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:07.744 18:45:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.744 18:45:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.744 18:45:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.309 18:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.309 18:45:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:08.309 18:45:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.309 18:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.309 18:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.567 18:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.567 18:45:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:08.567 18:45:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.567 18:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.567 18:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.824 18:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.824 
18:45:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:08.824 18:45:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.824 18:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.824 18:45:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.082 18:45:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.082 18:45:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:09.082 18:45:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.082 18:45:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.082 18:45:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.340 18:45:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.340 18:45:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:09.340 18:45:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.340 18:45:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.340 18:45:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.906 18:45:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.906 18:45:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:09.906 18:45:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.906 18:45:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.906 18:45:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.164 18:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.164 
18:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:10.164 18:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.164 18:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.164 18:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.421 18:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.421 18:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:10.421 18:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.421 18:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.421 18:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.679 18:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.679 18:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:10.679 18:45:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.679 18:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.679 18:45:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.937 18:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.937 18:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:10.937 18:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.937 18:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.937 18:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.502 18:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.502 
18:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:11.502 18:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.502 18:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.502 18:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.760 18:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.760 18:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:11.760 18:45:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.760 18:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.760 18:45:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.018 18:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.018 18:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:12.018 18:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.018 18:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.018 18:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.275 18:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.275 18:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:12.275 18:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.275 18:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.275 18:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.533 18:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.533 
18:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:12.533 18:46:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.533 18:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.791 18:46:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.791 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3544431 00:14:13.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3544431) - No such process 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3544431 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:13.049 rmmod nvme_tcp 00:14:13.049 rmmod nvme_fabrics 00:14:13.049 rmmod nvme_keyring 00:14:13.049 18:46:01 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3544384 ']' 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3544384 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3544384 ']' 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3544384 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3544384 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3544384' 00:14:13.049 killing process with pid 3544384 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3544384 00:14:13.049 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3544384 00:14:13.308 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.308 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:13.308 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:13.308 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.308 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.308 18:46:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.308 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.308 18:46:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.208 18:46:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.208 00:14:15.208 real 0m15.236s 00:14:15.208 user 0m38.263s 00:14:15.208 sys 0m5.754s 00:14:15.466 18:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:15.466 18:46:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.466 ************************************ 00:14:15.466 END TEST nvmf_connect_stress 00:14:15.466 ************************************ 00:14:15.466 18:46:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:15.466 18:46:03 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:15.466 18:46:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:15.466 18:46:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.466 18:46:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.466 ************************************ 00:14:15.466 START TEST nvmf_fused_ordering 00:14:15.466 ************************************ 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:15.466 * Looking for test storage... 
00:14:15.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.466 18:46:03 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.466 18:46:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.364 18:46:05 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.364 
18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.364 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:17.365 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:17.365 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.365 
18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:17.365 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.365 18:46:05 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:17.365 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.365 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:17.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:14:17.623 00:14:17.623 --- 10.0.0.2 ping statistics --- 00:14:17.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.623 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:14:17.623 00:14:17.623 --- 10.0.0.1 ping statistics --- 00:14:17.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.623 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:17.623 18:46:05 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3547674 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3547674 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3547674 ']' 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.623 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.623 [2024-07-14 18:46:05.712447] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:17.623 [2024-07-14 18:46:05.712533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.623 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.623 [2024-07-14 18:46:05.781942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.882 [2024-07-14 18:46:05.871434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:17.882 [2024-07-14 18:46:05.871489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.882 [2024-07-14 18:46:05.871506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.882 [2024-07-14 18:46:05.871519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.882 [2024-07-14 18:46:05.871531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.882 [2024-07-14 18:46:05.871561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.882 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.882 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:17.882 18:46:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.882 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.882 18:46:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.882 [2024-07-14 18:46:06.005048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.882 [2024-07-14 18:46:06.021243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.882 NULL1 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
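The `rpc_cmd` calls above (`nvmf_create_transport`, `nvmf_create_subsystem`, `nvmf_subsystem_add_listener`, `bdev_null_create`, `nvmf_subsystem_add_ns`) are each a JSON-RPC request sent over the target's UNIX socket. A sketch of building those request envelopes: the method names come from the log, but the parameter field names below merely mirror the CLI flags and are assumptions, not the authoritative SPDK RPC schema.

```python
import itertools
import json

_ids = itertools.count(1)

def rpc_request(method: str, **params) -> str:
    """Build a JSON-RPC 2.0 request like the one rpc.py writes to
    /var/tmp/spdk.sock."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

# Setup sequence from the log; param names are assumptions mirroring the flags.
setup = [
    rpc_request("nvmf_create_transport", trtype="tcp"),
    rpc_request("nvmf_create_subsystem", nqn="nqn.2016-06.io.spdk:cnode1",
                serial_number="SPDK00000000000001", max_namespaces=10),
    rpc_request("nvmf_subsystem_add_listener", nqn="nqn.2016-06.io.spdk:cnode1",
                trtype="tcp", traddr="10.0.0.2", trsvcid="4420"),
    rpc_request("bdev_null_create", name="NULL1"),
    rpc_request("nvmf_subsystem_add_ns", nqn="nqn.2016-06.io.spdk:cnode1",
                bdev_name="NULL1"),
]
```

Only after `bdev_wait_for_examine` returns is the null bdev attached as namespace 1, which is why the test binary later reports "Namespace ID: 1".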
00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.882 18:46:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:17.882 [2024-07-14 18:46:06.066008] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:17.882 [2024-07-14 18:46:06.066052] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547698 ] 00:14:17.882 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.447 Attached to nqn.2016-06.io.spdk:cnode1 00:14:18.447 Namespace ID: 1 size: 1GB 00:14:18.447 fused_ordering(0) 00:14:18.447 fused_ordering(1) 00:14:18.447 fused_ordering(2) 00:14:18.447 fused_ordering(3) 00:14:18.447 fused_ordering(4) 00:14:18.448 fused_ordering(5) 00:14:18.448 fused_ordering(6) 00:14:18.448 fused_ordering(7) 00:14:18.448 fused_ordering(8) 00:14:18.448 fused_ordering(9) 00:14:18.448 fused_ordering(10) 00:14:18.448 fused_ordering(11) 00:14:18.448 fused_ordering(12) 00:14:18.448 fused_ordering(13) 00:14:18.448 fused_ordering(14) 00:14:18.448 fused_ordering(15) 00:14:18.448 fused_ordering(16) 00:14:18.448 fused_ordering(17) 00:14:18.448 fused_ordering(18) 00:14:18.448 fused_ordering(19) 00:14:18.448 fused_ordering(20) 00:14:18.448 fused_ordering(21) 00:14:18.448 fused_ordering(22) 00:14:18.448 fused_ordering(23) 00:14:18.448 fused_ordering(24) 00:14:18.448 fused_ordering(25) 00:14:18.448 
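The `fused_ordering` binary is pointed at the target with a single space-separated transport-ID string (`trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:...`). A simplified sketch of reading that format (the real parser is SPDK's C-side `spdk_nvme_transport_id_parse`; this is only an illustration):

```python
def parse_trid(trid: str) -> dict:
    """Split an SPDK-style transport ID string into its fields."""
    fields = {}
    for token in trid.split():
        # subnqn values contain ':' themselves, so split only on the first one.
        key, _, value = token.partition(":")
        fields[key] = value
    return fields

trid = parse_trid(
    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
    "subnqn:nqn.2016-06.io.spdk:cnode1"
)
assert trid["traddr"] == "10.0.0.2"
assert trid["subnqn"] == "nqn.2016-06.io.spdk:cnode1"
```

Splitting only on the first `:` per token is the key detail, since NQNs embed colons of their own.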
fused_ordering(26) 00:14:18.448 fused_ordering(27) 00:14:18.448 fused_ordering(28) 00:14:18.448 fused_ordering(29) 00:14:18.448 fused_ordering(30) 00:14:18.448 fused_ordering(31) 00:14:18.448 fused_ordering(32) 00:14:18.448 fused_ordering(33) 00:14:18.448 fused_ordering(34) 00:14:18.448 fused_ordering(35) 00:14:18.448 fused_ordering(36) 00:14:18.448 fused_ordering(37) 00:14:18.448 fused_ordering(38) 00:14:18.448 fused_ordering(39) 00:14:18.448 fused_ordering(40) 00:14:18.448 fused_ordering(41) 00:14:18.448 fused_ordering(42) 00:14:18.448 fused_ordering(43) 00:14:18.448 fused_ordering(44) 00:14:18.448 fused_ordering(45) 00:14:18.448 fused_ordering(46) 00:14:18.448 fused_ordering(47) 00:14:18.448 fused_ordering(48) 00:14:18.448 fused_ordering(49) 00:14:18.448 fused_ordering(50) 00:14:18.448 fused_ordering(51) 00:14:18.448 fused_ordering(52) 00:14:18.448 fused_ordering(53) 00:14:18.448 fused_ordering(54) 00:14:18.448 fused_ordering(55) 00:14:18.448 fused_ordering(56) 00:14:18.448 fused_ordering(57) 00:14:18.448 fused_ordering(58) 00:14:18.448 fused_ordering(59) 00:14:18.448 fused_ordering(60) 00:14:18.448 fused_ordering(61) 00:14:18.448 fused_ordering(62) 00:14:18.448 fused_ordering(63) 00:14:18.448 fused_ordering(64) 00:14:18.448 fused_ordering(65) 00:14:18.448 fused_ordering(66) 00:14:18.448 fused_ordering(67) 00:14:18.448 fused_ordering(68) 00:14:18.448 fused_ordering(69) 00:14:18.448 fused_ordering(70) 00:14:18.448 fused_ordering(71) 00:14:18.448 fused_ordering(72) 00:14:18.448 fused_ordering(73) 00:14:18.448 fused_ordering(74) 00:14:18.448 fused_ordering(75) 00:14:18.448 fused_ordering(76) 00:14:18.448 fused_ordering(77) 00:14:18.448 fused_ordering(78) 00:14:18.448 fused_ordering(79) 00:14:18.448 fused_ordering(80) 00:14:18.448 fused_ordering(81) 00:14:18.448 fused_ordering(82) 00:14:18.448 fused_ordering(83) 00:14:18.448 fused_ordering(84) 00:14:18.448 fused_ordering(85) 00:14:18.448 fused_ordering(86) 00:14:18.448 fused_ordering(87) 00:14:18.448 
fused_ordering(88) 00:14:18.448 fused_ordering(89) 00:14:18.448 fused_ordering(90) 00:14:18.448 fused_ordering(91) 00:14:18.448 fused_ordering(92) 00:14:18.448 fused_ordering(93) 00:14:18.448 fused_ordering(94) 00:14:18.448 fused_ordering(95) 00:14:18.448 fused_ordering(96) 00:14:18.448 fused_ordering(97) 00:14:18.448 fused_ordering(98) 00:14:18.448 fused_ordering(99) 00:14:18.448 fused_ordering(100) 00:14:18.448 fused_ordering(101) 00:14:18.448 fused_ordering(102) 00:14:18.448 fused_ordering(103) 00:14:18.448 fused_ordering(104) 00:14:18.448 fused_ordering(105) 00:14:18.448 fused_ordering(106) 00:14:18.448 fused_ordering(107) 00:14:18.448 fused_ordering(108) 00:14:18.448 fused_ordering(109) 00:14:18.448 fused_ordering(110) 00:14:18.448 fused_ordering(111) 00:14:18.448 fused_ordering(112) 00:14:18.448 fused_ordering(113) 00:14:18.448 fused_ordering(114) 00:14:18.448 fused_ordering(115) 00:14:18.448 fused_ordering(116) 00:14:18.448 fused_ordering(117) 00:14:18.448 fused_ordering(118) 00:14:18.448 fused_ordering(119) 00:14:18.448 fused_ordering(120) 00:14:18.448 fused_ordering(121) 00:14:18.448 fused_ordering(122) 00:14:18.448 fused_ordering(123) 00:14:18.448 fused_ordering(124) 00:14:18.448 fused_ordering(125) 00:14:18.448 fused_ordering(126) 00:14:18.448 fused_ordering(127) 00:14:18.448 fused_ordering(128) 00:14:18.448 fused_ordering(129) 00:14:18.448 fused_ordering(130) 00:14:18.448 fused_ordering(131) 00:14:18.448 fused_ordering(132) 00:14:18.448 fused_ordering(133) 00:14:18.448 fused_ordering(134) 00:14:18.448 fused_ordering(135) 00:14:18.448 fused_ordering(136) 00:14:18.448 fused_ordering(137) 00:14:18.448 fused_ordering(138) 00:14:18.448 fused_ordering(139) 00:14:18.448 fused_ordering(140) 00:14:18.448 fused_ordering(141) 00:14:18.448 fused_ordering(142) 00:14:18.448 fused_ordering(143) 00:14:18.448 fused_ordering(144) 00:14:18.448 fused_ordering(145) 00:14:18.448 fused_ordering(146) 00:14:18.448 fused_ordering(147) 00:14:18.448 fused_ordering(148) 
00:14:18.448 fused_ordering(149) 00:14:18.448 fused_ordering(150) 00:14:18.448 fused_ordering(151) 00:14:18.448 fused_ordering(152) 00:14:18.448 fused_ordering(153) 00:14:18.448 fused_ordering(154) 00:14:18.448 fused_ordering(155) 00:14:18.448 fused_ordering(156) 00:14:18.448 fused_ordering(157) 00:14:18.448 fused_ordering(158) 00:14:18.448 fused_ordering(159) 00:14:18.448 fused_ordering(160) 00:14:18.448 fused_ordering(161) 00:14:18.448 fused_ordering(162) 00:14:18.448 fused_ordering(163) 00:14:18.448 fused_ordering(164) 00:14:18.448 fused_ordering(165) 00:14:18.448 fused_ordering(166) 00:14:18.448 fused_ordering(167) 00:14:18.448 fused_ordering(168) 00:14:18.448 fused_ordering(169) 00:14:18.448 fused_ordering(170) 00:14:18.448 fused_ordering(171) 00:14:18.448 fused_ordering(172) 00:14:18.448 fused_ordering(173) 00:14:18.448 fused_ordering(174) 00:14:18.448 fused_ordering(175) 00:14:18.448 fused_ordering(176) 00:14:18.448 fused_ordering(177) 00:14:18.448 fused_ordering(178) 00:14:18.448 fused_ordering(179) 00:14:18.448 fused_ordering(180) 00:14:18.448 fused_ordering(181) 00:14:18.448 fused_ordering(182) 00:14:18.448 fused_ordering(183) 00:14:18.448 fused_ordering(184) 00:14:18.448 fused_ordering(185) 00:14:18.448 fused_ordering(186) 00:14:18.448 fused_ordering(187) 00:14:18.448 fused_ordering(188) 00:14:18.448 fused_ordering(189) 00:14:18.448 fused_ordering(190) 00:14:18.448 fused_ordering(191) 00:14:18.448 fused_ordering(192) 00:14:18.448 fused_ordering(193) 00:14:18.448 fused_ordering(194) 00:14:18.448 fused_ordering(195) 00:14:18.448 fused_ordering(196) 00:14:18.448 fused_ordering(197) 00:14:18.448 fused_ordering(198) 00:14:18.448 fused_ordering(199) 00:14:18.448 fused_ordering(200) 00:14:18.448 fused_ordering(201) 00:14:18.448 fused_ordering(202) 00:14:18.448 fused_ordering(203) 00:14:18.448 fused_ordering(204) 00:14:18.448 fused_ordering(205) 00:14:18.705 fused_ordering(206) 00:14:18.705 fused_ordering(207) 00:14:18.705 fused_ordering(208) 00:14:18.705 
fused_ordering(209) 00:14:18.705 fused_ordering(210) 00:14:18.705 fused_ordering(211) 00:14:18.705 fused_ordering(212) 00:14:18.705 fused_ordering(213) 00:14:18.705 fused_ordering(214) 00:14:18.705 fused_ordering(215) 00:14:18.705 fused_ordering(216) 00:14:18.705 fused_ordering(217) 00:14:18.705 fused_ordering(218) 00:14:18.705 fused_ordering(219) 00:14:18.705 fused_ordering(220) 00:14:18.705 fused_ordering(221) 00:14:18.705 fused_ordering(222) 00:14:18.705 fused_ordering(223) 00:14:18.705 fused_ordering(224) 00:14:18.705 fused_ordering(225) 00:14:18.705 fused_ordering(226) 00:14:18.705 fused_ordering(227) 00:14:18.705 fused_ordering(228) 00:14:18.705 fused_ordering(229) 00:14:18.705 fused_ordering(230) 00:14:18.705 fused_ordering(231) 00:14:18.705 fused_ordering(232) 00:14:18.705 fused_ordering(233) 00:14:18.705 fused_ordering(234) 00:14:18.705 fused_ordering(235) 00:14:18.705 fused_ordering(236) 00:14:18.705 fused_ordering(237) 00:14:18.705 fused_ordering(238) 00:14:18.705 fused_ordering(239) 00:14:18.705 fused_ordering(240) 00:14:18.705 fused_ordering(241) 00:14:18.705 fused_ordering(242) 00:14:18.705 fused_ordering(243) 00:14:18.705 fused_ordering(244) 00:14:18.705 fused_ordering(245) 00:14:18.705 fused_ordering(246) 00:14:18.705 fused_ordering(247) 00:14:18.705 fused_ordering(248) 00:14:18.705 fused_ordering(249) 00:14:18.705 fused_ordering(250) 00:14:18.705 fused_ordering(251) 00:14:18.705 fused_ordering(252) 00:14:18.705 fused_ordering(253) 00:14:18.705 fused_ordering(254) 00:14:18.705 fused_ordering(255) 00:14:18.705 fused_ordering(256) 00:14:18.705 fused_ordering(257) 00:14:18.705 fused_ordering(258) 00:14:18.705 fused_ordering(259) 00:14:18.705 fused_ordering(260) 00:14:18.705 fused_ordering(261) 00:14:18.705 fused_ordering(262) 00:14:18.705 fused_ordering(263) 00:14:18.705 fused_ordering(264) 00:14:18.705 fused_ordering(265) 00:14:18.705 fused_ordering(266) 00:14:18.705 fused_ordering(267) 00:14:18.705 fused_ordering(268) 00:14:18.705 fused_ordering(269) 
00:14:18.705 fused_ordering(270) 00:14:18.705 fused_ordering(271) 00:14:18.705 fused_ordering(272) 00:14:18.705 fused_ordering(273) 00:14:18.705 fused_ordering(274) 00:14:18.705 fused_ordering(275) 00:14:18.705 fused_ordering(276) 00:14:18.705 fused_ordering(277) 00:14:18.705 fused_ordering(278) 00:14:18.705 fused_ordering(279) 00:14:18.705 fused_ordering(280) 00:14:18.705 fused_ordering(281) 00:14:18.705 fused_ordering(282) 00:14:18.705 fused_ordering(283) 00:14:18.705 fused_ordering(284) 00:14:18.705 fused_ordering(285) 00:14:18.705 fused_ordering(286) 00:14:18.705 fused_ordering(287) 00:14:18.705 fused_ordering(288) 00:14:18.705 fused_ordering(289) 00:14:18.705 fused_ordering(290) 00:14:18.705 fused_ordering(291) 00:14:18.705 fused_ordering(292) 00:14:18.705 fused_ordering(293) 00:14:18.705 fused_ordering(294) 00:14:18.705 fused_ordering(295) 00:14:18.705 fused_ordering(296) 00:14:18.705 fused_ordering(297) 00:14:18.705 fused_ordering(298) 00:14:18.705 fused_ordering(299) 00:14:18.705 fused_ordering(300) 00:14:18.705 fused_ordering(301) 00:14:18.705 fused_ordering(302) 00:14:18.705 fused_ordering(303) 00:14:18.705 fused_ordering(304) 00:14:18.705 fused_ordering(305) 00:14:18.705 fused_ordering(306) 00:14:18.705 fused_ordering(307) 00:14:18.705 fused_ordering(308) 00:14:18.705 fused_ordering(309) 00:14:18.705 fused_ordering(310) 00:14:18.705 fused_ordering(311) 00:14:18.705 fused_ordering(312) 00:14:18.705 fused_ordering(313) 00:14:18.705 fused_ordering(314) 00:14:18.705 fused_ordering(315) 00:14:18.705 fused_ordering(316) 00:14:18.705 fused_ordering(317) 00:14:18.705 fused_ordering(318) 00:14:18.705 fused_ordering(319) 00:14:18.705 fused_ordering(320) 00:14:18.705 fused_ordering(321) 00:14:18.705 fused_ordering(322) 00:14:18.705 fused_ordering(323) 00:14:18.705 fused_ordering(324) 00:14:18.705 fused_ordering(325) 00:14:18.705 fused_ordering(326) 00:14:18.705 fused_ordering(327) 00:14:18.705 fused_ordering(328) 00:14:18.705 fused_ordering(329) 00:14:18.705 
fused_ordering(330) 00:14:18.705 fused_ordering(331) 00:14:18.705 fused_ordering(332) 00:14:18.705 fused_ordering(333) 00:14:18.705 fused_ordering(334) 00:14:18.705 fused_ordering(335) 00:14:18.705 fused_ordering(336) 00:14:18.705 fused_ordering(337) 00:14:18.705 fused_ordering(338) 00:14:18.705 fused_ordering(339) 00:14:18.705 fused_ordering(340) 00:14:18.705 fused_ordering(341) 00:14:18.705 fused_ordering(342) 00:14:18.705 fused_ordering(343) 00:14:18.705 fused_ordering(344) 00:14:18.705 fused_ordering(345) 00:14:18.705 fused_ordering(346) 00:14:18.705 fused_ordering(347) 00:14:18.705 fused_ordering(348) 00:14:18.705 fused_ordering(349) 00:14:18.705 fused_ordering(350) 00:14:18.705 fused_ordering(351) 00:14:18.705 fused_ordering(352) 00:14:18.705 fused_ordering(353) 00:14:18.705 fused_ordering(354) 00:14:18.705 fused_ordering(355) 00:14:18.705 fused_ordering(356) 00:14:18.705 fused_ordering(357) 00:14:18.705 fused_ordering(358) 00:14:18.705 fused_ordering(359) 00:14:18.705 fused_ordering(360) 00:14:18.705 fused_ordering(361) 00:14:18.705 fused_ordering(362) 00:14:18.705 fused_ordering(363) 00:14:18.705 fused_ordering(364) 00:14:18.705 fused_ordering(365) 00:14:18.705 fused_ordering(366) 00:14:18.705 fused_ordering(367) 00:14:18.705 fused_ordering(368) 00:14:18.705 fused_ordering(369) 00:14:18.705 fused_ordering(370) 00:14:18.705 fused_ordering(371) 00:14:18.705 fused_ordering(372) 00:14:18.705 fused_ordering(373) 00:14:18.705 fused_ordering(374) 00:14:18.705 fused_ordering(375) 00:14:18.705 fused_ordering(376) 00:14:18.705 fused_ordering(377) 00:14:18.706 fused_ordering(378) 00:14:18.706 fused_ordering(379) 00:14:18.706 fused_ordering(380) 00:14:18.706 fused_ordering(381) 00:14:18.706 fused_ordering(382) 00:14:18.706 fused_ordering(383) 00:14:18.706 fused_ordering(384) 00:14:18.706 fused_ordering(385) 00:14:18.706 fused_ordering(386) 00:14:18.706 fused_ordering(387) 00:14:18.706 fused_ordering(388) 00:14:18.706 fused_ordering(389) 00:14:18.706 fused_ordering(390) 
00:14:18.706 fused_ordering(391) 00:14:18.706 fused_ordering(392) 00:14:18.706 fused_ordering(393) 00:14:18.706 fused_ordering(394) 00:14:18.706 fused_ordering(395) 00:14:18.706 fused_ordering(396) 00:14:18.706 fused_ordering(397) 00:14:18.706 fused_ordering(398) 00:14:18.706 fused_ordering(399) 00:14:18.706 fused_ordering(400) 00:14:18.706 fused_ordering(401) 00:14:18.706 fused_ordering(402) 00:14:18.706 fused_ordering(403) 00:14:18.706 fused_ordering(404) 00:14:18.706 fused_ordering(405) 00:14:18.706 fused_ordering(406) 00:14:18.706 fused_ordering(407) 00:14:18.706 fused_ordering(408) 00:14:18.706 fused_ordering(409) 00:14:18.706 fused_ordering(410) 00:14:19.269 fused_ordering(411) 00:14:19.269 fused_ordering(412) 00:14:19.269 fused_ordering(413) 00:14:19.269 fused_ordering(414) 00:14:19.269 fused_ordering(415) 00:14:19.269 fused_ordering(416) 00:14:19.269 fused_ordering(417) 00:14:19.269 fused_ordering(418) 00:14:19.269 fused_ordering(419) 00:14:19.269 fused_ordering(420) 00:14:19.269 fused_ordering(421) 00:14:19.269 fused_ordering(422) 00:14:19.269 fused_ordering(423) 00:14:19.269 fused_ordering(424) 00:14:19.269 fused_ordering(425) 00:14:19.269 fused_ordering(426) 00:14:19.269 fused_ordering(427) 00:14:19.269 fused_ordering(428) 00:14:19.269 fused_ordering(429) 00:14:19.269 fused_ordering(430) 00:14:19.269 fused_ordering(431) 00:14:19.269 fused_ordering(432) 00:14:19.269 fused_ordering(433) 00:14:19.269 fused_ordering(434) 00:14:19.269 fused_ordering(435) 00:14:19.269 fused_ordering(436) 00:14:19.269 fused_ordering(437) 00:14:19.269 fused_ordering(438) 00:14:19.269 fused_ordering(439) 00:14:19.269 fused_ordering(440) 00:14:19.269 fused_ordering(441) 00:14:19.269 fused_ordering(442) 00:14:19.269 fused_ordering(443) 00:14:19.269 fused_ordering(444) 00:14:19.269 fused_ordering(445) 00:14:19.269 fused_ordering(446) 00:14:19.269 fused_ordering(447) 00:14:19.269 fused_ordering(448) 00:14:19.269 fused_ordering(449) 00:14:19.269 fused_ordering(450) 00:14:19.269 
fused_ordering(451) 00:14:19.269 fused_ordering(452) 00:14:19.269 fused_ordering(453) 00:14:19.269 fused_ordering(454) 00:14:19.269 fused_ordering(455) 00:14:19.269 fused_ordering(456) 00:14:19.270 fused_ordering(457) 00:14:19.270 fused_ordering(458) 00:14:19.270 fused_ordering(459) 00:14:19.270 fused_ordering(460) 00:14:19.270 fused_ordering(461) 00:14:19.270 fused_ordering(462) 00:14:19.270 fused_ordering(463) 00:14:19.270 fused_ordering(464) 00:14:19.270 fused_ordering(465) 00:14:19.270 fused_ordering(466) 00:14:19.270 fused_ordering(467) 00:14:19.270 fused_ordering(468) 00:14:19.270 fused_ordering(469) 00:14:19.270 fused_ordering(470) 00:14:19.270 fused_ordering(471) 00:14:19.270 fused_ordering(472) 00:14:19.270 fused_ordering(473) 00:14:19.270 fused_ordering(474) 00:14:19.270 fused_ordering(475) 00:14:19.270 fused_ordering(476) 00:14:19.270 fused_ordering(477) 00:14:19.270 fused_ordering(478) 00:14:19.270 fused_ordering(479) 00:14:19.270 fused_ordering(480) 00:14:19.270 fused_ordering(481) 00:14:19.270 fused_ordering(482) 00:14:19.270 fused_ordering(483) 00:14:19.270 fused_ordering(484) 00:14:19.270 fused_ordering(485) 00:14:19.270 fused_ordering(486) 00:14:19.270 fused_ordering(487) 00:14:19.270 fused_ordering(488) 00:14:19.270 fused_ordering(489) 00:14:19.270 fused_ordering(490) 00:14:19.270 fused_ordering(491) 00:14:19.270 fused_ordering(492) 00:14:19.270 fused_ordering(493) 00:14:19.270 fused_ordering(494) 00:14:19.270 fused_ordering(495) 00:14:19.270 fused_ordering(496) 00:14:19.270 fused_ordering(497) 00:14:19.270 fused_ordering(498) 00:14:19.270 fused_ordering(499) 00:14:19.270 fused_ordering(500) 00:14:19.270 fused_ordering(501) 00:14:19.270 fused_ordering(502) 00:14:19.270 fused_ordering(503) 00:14:19.270 fused_ordering(504) 00:14:19.270 fused_ordering(505) 00:14:19.270 fused_ordering(506) 00:14:19.270 fused_ordering(507) 00:14:19.270 fused_ordering(508) 00:14:19.270 fused_ordering(509) 00:14:19.270 fused_ordering(510) 00:14:19.270 fused_ordering(511) 
00:14:19.270 fused_ordering(512) 00:14:19.270 fused_ordering(513) 00:14:19.270 fused_ordering(514) 00:14:19.270 fused_ordering(515) 00:14:19.270 fused_ordering(516) 00:14:19.270 fused_ordering(517) 00:14:19.270 fused_ordering(518) 00:14:19.270 fused_ordering(519) 00:14:19.270 fused_ordering(520) 00:14:19.270 fused_ordering(521) 00:14:19.270 fused_ordering(522) 00:14:19.270 fused_ordering(523) 00:14:19.270 fused_ordering(524) 00:14:19.270 fused_ordering(525) 00:14:19.270 fused_ordering(526) 00:14:19.270 fused_ordering(527) 00:14:19.270 fused_ordering(528) 00:14:19.270 fused_ordering(529) 00:14:19.270 fused_ordering(530) 00:14:19.270 fused_ordering(531) 00:14:19.270 fused_ordering(532) 00:14:19.270 fused_ordering(533) 00:14:19.270 fused_ordering(534) 00:14:19.270 fused_ordering(535) 00:14:19.270 fused_ordering(536) 00:14:19.270 fused_ordering(537) 00:14:19.270 fused_ordering(538) 00:14:19.270 fused_ordering(539) 00:14:19.270 fused_ordering(540) 00:14:19.270 fused_ordering(541) 00:14:19.270 fused_ordering(542) 00:14:19.270 fused_ordering(543) 00:14:19.270 fused_ordering(544) 00:14:19.270 fused_ordering(545) 00:14:19.270 fused_ordering(546) 00:14:19.270 fused_ordering(547) 00:14:19.270 fused_ordering(548) 00:14:19.270 fused_ordering(549) 00:14:19.270 fused_ordering(550) 00:14:19.270 fused_ordering(551) 00:14:19.270 fused_ordering(552) 00:14:19.270 fused_ordering(553) 00:14:19.270 fused_ordering(554) 00:14:19.270 fused_ordering(555) 00:14:19.270 fused_ordering(556) 00:14:19.270 fused_ordering(557) 00:14:19.270 fused_ordering(558) 00:14:19.270 fused_ordering(559) 00:14:19.270 fused_ordering(560) 00:14:19.270 fused_ordering(561) 00:14:19.270 fused_ordering(562) 00:14:19.270 fused_ordering(563) 00:14:19.270 fused_ordering(564) 00:14:19.270 fused_ordering(565) 00:14:19.270 fused_ordering(566) 00:14:19.270 fused_ordering(567) 00:14:19.270 fused_ordering(568) 00:14:19.270 fused_ordering(569) 00:14:19.270 fused_ordering(570) 00:14:19.270 fused_ordering(571) 00:14:19.270 
fused_ordering(572) 00:14:19.270 fused_ordering(573) 00:14:19.270 fused_ordering(574) 00:14:19.270 fused_ordering(575) 00:14:19.270 fused_ordering(576) 00:14:19.270 fused_ordering(577) 00:14:19.270 fused_ordering(578) 00:14:19.270 fused_ordering(579) 00:14:19.270 fused_ordering(580) 00:14:19.270 fused_ordering(581) 00:14:19.270 fused_ordering(582) 00:14:19.270 fused_ordering(583) 00:14:19.270 fused_ordering(584) 00:14:19.270 fused_ordering(585) 00:14:19.270 fused_ordering(586) 00:14:19.270 fused_ordering(587) 00:14:19.270 fused_ordering(588) 00:14:19.270 fused_ordering(589) 00:14:19.270 fused_ordering(590) 00:14:19.270 fused_ordering(591) 00:14:19.270 fused_ordering(592) 00:14:19.270 fused_ordering(593) 00:14:19.270 fused_ordering(594) 00:14:19.270 fused_ordering(595) 00:14:19.270 fused_ordering(596) 00:14:19.270 fused_ordering(597) 00:14:19.270 fused_ordering(598) 00:14:19.270 fused_ordering(599) 00:14:19.270 fused_ordering(600) 00:14:19.270 fused_ordering(601) 00:14:19.270 fused_ordering(602) 00:14:19.270 fused_ordering(603) 00:14:19.270 fused_ordering(604) 00:14:19.270 fused_ordering(605) 00:14:19.270 fused_ordering(606) 00:14:19.270 fused_ordering(607) 00:14:19.270 fused_ordering(608) 00:14:19.270 fused_ordering(609) 00:14:19.270 fused_ordering(610) 00:14:19.270 fused_ordering(611) 00:14:19.270 fused_ordering(612) 00:14:19.270 fused_ordering(613) 00:14:19.270 fused_ordering(614) 00:14:19.270 fused_ordering(615) 00:14:19.834 fused_ordering(616) 00:14:19.834 fused_ordering(617) 00:14:19.834 fused_ordering(618) 00:14:19.834 fused_ordering(619) 00:14:19.834 fused_ordering(620) 00:14:19.834 fused_ordering(621) 00:14:19.834 fused_ordering(622) 00:14:19.834 fused_ordering(623) 00:14:19.834 fused_ordering(624) 00:14:19.834 fused_ordering(625) 00:14:19.834 fused_ordering(626) 00:14:19.834 fused_ordering(627) 00:14:19.834 fused_ordering(628) 00:14:19.834 fused_ordering(629) 00:14:19.834 fused_ordering(630) 00:14:19.834 fused_ordering(631) 00:14:19.834 fused_ordering(632) 
00:14:19.834 fused_ordering(633) 00:14:19.834 fused_ordering(634) 00:14:19.834 fused_ordering(635) 00:14:19.834 fused_ordering(636) 00:14:19.834 fused_ordering(637) 00:14:19.834 fused_ordering(638) 00:14:19.834 fused_ordering(639) 00:14:19.834 fused_ordering(640) 00:14:19.834 fused_ordering(641) 00:14:19.834 fused_ordering(642) 00:14:19.834 fused_ordering(643) 00:14:19.834 fused_ordering(644) 00:14:19.834 fused_ordering(645) 00:14:19.834 fused_ordering(646) 00:14:19.834 fused_ordering(647) 00:14:19.834 fused_ordering(648) 00:14:19.834 fused_ordering(649) 00:14:19.834 fused_ordering(650) 00:14:19.834 fused_ordering(651) 00:14:19.834 fused_ordering(652) 00:14:19.834 fused_ordering(653) 00:14:19.834 fused_ordering(654) 00:14:19.834 fused_ordering(655) 00:14:19.834 fused_ordering(656) 00:14:19.834 fused_ordering(657) 00:14:19.834 fused_ordering(658) 00:14:19.834 fused_ordering(659) 00:14:19.834 fused_ordering(660) 00:14:19.834 fused_ordering(661) 00:14:19.834 fused_ordering(662) 00:14:19.834 fused_ordering(663) 00:14:19.834 fused_ordering(664) 00:14:19.834 fused_ordering(665) 00:14:19.834 fused_ordering(666) 00:14:19.834 fused_ordering(667) 00:14:19.834 fused_ordering(668) 00:14:19.834 fused_ordering(669) 00:14:19.834 fused_ordering(670) 00:14:19.834 fused_ordering(671) 00:14:19.834 fused_ordering(672) 00:14:19.834 fused_ordering(673) 00:14:19.834 fused_ordering(674) 00:14:19.834 fused_ordering(675) 00:14:19.834 fused_ordering(676) 00:14:19.834 fused_ordering(677) 00:14:19.834 fused_ordering(678) 00:14:19.834 fused_ordering(679) 00:14:19.834 fused_ordering(680) 00:14:19.834 fused_ordering(681) 00:14:19.834 fused_ordering(682) 00:14:19.834 fused_ordering(683) 00:14:19.834 fused_ordering(684) 00:14:19.834 fused_ordering(685) 00:14:19.834 fused_ordering(686) 00:14:19.834 fused_ordering(687) 00:14:19.834 fused_ordering(688) 00:14:19.834 fused_ordering(689) 00:14:19.834 fused_ordering(690) 00:14:19.834 fused_ordering(691) 00:14:19.834 fused_ordering(692) 00:14:19.834 
fused_ordering(693) 00:14:19.834 fused_ordering(694) 00:14:19.834 fused_ordering(695) 00:14:19.834 fused_ordering(696) 00:14:19.834 fused_ordering(697) 00:14:19.834 fused_ordering(698) 00:14:19.834 fused_ordering(699) 00:14:19.834 fused_ordering(700) 00:14:19.834 fused_ordering(701) 00:14:19.834 fused_ordering(702) 00:14:19.834 fused_ordering(703) 00:14:19.834 fused_ordering(704) 00:14:19.834 fused_ordering(705) 00:14:19.834 fused_ordering(706) 00:14:19.834 fused_ordering(707) 00:14:19.834 fused_ordering(708) 00:14:19.834 fused_ordering(709) 00:14:19.834 fused_ordering(710) 00:14:19.834 fused_ordering(711) 00:14:19.834 fused_ordering(712) 00:14:19.834 fused_ordering(713) 00:14:19.834 fused_ordering(714) 00:14:19.834 fused_ordering(715) 00:14:19.834 fused_ordering(716) 00:14:19.834 fused_ordering(717) 00:14:19.834 fused_ordering(718) 00:14:19.834 fused_ordering(719) 00:14:19.834 fused_ordering(720) 00:14:19.834 fused_ordering(721) 00:14:19.834 fused_ordering(722) 00:14:19.834 fused_ordering(723) 00:14:19.834 fused_ordering(724) 00:14:19.834 fused_ordering(725) 00:14:19.834 fused_ordering(726) 00:14:19.834 fused_ordering(727) 00:14:19.834 fused_ordering(728) 00:14:19.834 fused_ordering(729) 00:14:19.834 fused_ordering(730) 00:14:19.834 fused_ordering(731) 00:14:19.834 fused_ordering(732) 00:14:19.834 fused_ordering(733) 00:14:19.834 fused_ordering(734) 00:14:19.834 fused_ordering(735) 00:14:19.834 fused_ordering(736) 00:14:19.834 fused_ordering(737) 00:14:19.834 fused_ordering(738) 00:14:19.834 fused_ordering(739) 00:14:19.834 fused_ordering(740) 00:14:19.834 fused_ordering(741) 00:14:19.834 fused_ordering(742) 00:14:19.834 fused_ordering(743) 00:14:19.834 fused_ordering(744) 00:14:19.834 fused_ordering(745) 00:14:19.834 fused_ordering(746) 00:14:19.834 fused_ordering(747) 00:14:19.834 fused_ordering(748) 00:14:19.834 fused_ordering(749) 00:14:19.834 fused_ordering(750) 00:14:19.834 fused_ordering(751) 00:14:19.834 fused_ordering(752) 00:14:19.834 fused_ordering(753) 
00:14:19.834 fused_ordering(754) 00:14:19.834 fused_ordering(755) 00:14:19.834 fused_ordering(756) 00:14:19.834 fused_ordering(757) 00:14:19.834 fused_ordering(758) 00:14:19.834 fused_ordering(759) 00:14:19.834 fused_ordering(760) 00:14:19.834 fused_ordering(761) 00:14:19.834 fused_ordering(762) 00:14:19.834 fused_ordering(763) 00:14:19.834 fused_ordering(764) 00:14:19.834 fused_ordering(765) 00:14:19.834 fused_ordering(766) 00:14:19.834 fused_ordering(767) 00:14:19.834 fused_ordering(768) 00:14:19.834 fused_ordering(769) 00:14:19.834 fused_ordering(770) 00:14:19.834 fused_ordering(771) 00:14:19.834 fused_ordering(772) 00:14:19.834 fused_ordering(773) 00:14:19.834 fused_ordering(774) 00:14:19.834 fused_ordering(775) 00:14:19.834 fused_ordering(776) 00:14:19.834 fused_ordering(777) 00:14:19.834 fused_ordering(778) 00:14:19.834 fused_ordering(779) 00:14:19.834 fused_ordering(780) 00:14:19.834 fused_ordering(781) 00:14:19.834 fused_ordering(782) 00:14:19.834 fused_ordering(783) 00:14:19.834 fused_ordering(784) 00:14:19.834 fused_ordering(785) 00:14:19.834 fused_ordering(786) 00:14:19.834 fused_ordering(787) 00:14:19.834 fused_ordering(788) 00:14:19.834 fused_ordering(789) 00:14:19.834 fused_ordering(790) 00:14:19.834 fused_ordering(791) 00:14:19.834 fused_ordering(792) 00:14:19.834 fused_ordering(793) 00:14:19.834 fused_ordering(794) 00:14:19.834 fused_ordering(795) 00:14:19.834 fused_ordering(796) 00:14:19.834 fused_ordering(797) 00:14:19.834 fused_ordering(798) 00:14:19.834 fused_ordering(799) 00:14:19.834 fused_ordering(800) 00:14:19.834 fused_ordering(801) 00:14:19.834 fused_ordering(802) 00:14:19.834 fused_ordering(803) 00:14:19.834 fused_ordering(804) 00:14:19.834 fused_ordering(805) 00:14:19.834 fused_ordering(806) 00:14:19.834 fused_ordering(807) 00:14:19.834 fused_ordering(808) 00:14:19.834 fused_ordering(809) 00:14:19.834 fused_ordering(810) 00:14:19.834 fused_ordering(811) 00:14:19.834 fused_ordering(812) 00:14:19.834 fused_ordering(813) 00:14:19.834 
fused_ordering(814) 00:14:19.834 fused_ordering(815) 00:14:19.834 fused_ordering(816) 00:14:19.834 fused_ordering(817) 00:14:19.834 fused_ordering(818) 00:14:19.834 fused_ordering(819) 00:14:19.834 fused_ordering(820) 00:14:20.765 fused_ordering(821) 00:14:20.765 fused_ordering(822) 00:14:20.766 fused_ordering(823) 00:14:20.766 fused_ordering(824) 00:14:20.766 fused_ordering(825) 00:14:20.766 fused_ordering(826) 00:14:20.766 fused_ordering(827) 00:14:20.766 fused_ordering(828) 00:14:20.766 fused_ordering(829) 00:14:20.766 fused_ordering(830) 00:14:20.766 fused_ordering(831) 00:14:20.766 fused_ordering(832) 00:14:20.766 fused_ordering(833) 00:14:20.766 fused_ordering(834) 00:14:20.766 fused_ordering(835) 00:14:20.766 fused_ordering(836) 00:14:20.766 fused_ordering(837) 00:14:20.766 fused_ordering(838) 00:14:20.766 fused_ordering(839) 00:14:20.766 fused_ordering(840) 00:14:20.766 fused_ordering(841) 00:14:20.766 fused_ordering(842) 00:14:20.766 fused_ordering(843) 00:14:20.766 fused_ordering(844) 00:14:20.766 fused_ordering(845) 00:14:20.766 fused_ordering(846) 00:14:20.766 fused_ordering(847) 00:14:20.766 fused_ordering(848) 00:14:20.766 fused_ordering(849) 00:14:20.766 fused_ordering(850) 00:14:20.766 fused_ordering(851) 00:14:20.766 fused_ordering(852) 00:14:20.766 fused_ordering(853) 00:14:20.766 fused_ordering(854) 00:14:20.766 fused_ordering(855) 00:14:20.766 fused_ordering(856) 00:14:20.766 fused_ordering(857) 00:14:20.766 fused_ordering(858) 00:14:20.766 fused_ordering(859) 00:14:20.766 fused_ordering(860) 00:14:20.766 fused_ordering(861) 00:14:20.766 fused_ordering(862) 00:14:20.766 fused_ordering(863) 00:14:20.766 fused_ordering(864) 00:14:20.766 fused_ordering(865) 00:14:20.766 fused_ordering(866) 00:14:20.766 fused_ordering(867) 00:14:20.766 fused_ordering(868) 00:14:20.766 fused_ordering(869) 00:14:20.766 fused_ordering(870) 00:14:20.766 fused_ordering(871) 00:14:20.766 fused_ordering(872) 00:14:20.766 fused_ordering(873) 00:14:20.766 fused_ordering(874) 
00:14:20.766 fused_ordering(875) 00:14:20.766 fused_ordering(876) 00:14:20.766 fused_ordering(877) 00:14:20.766 fused_ordering(878) 00:14:20.766 fused_ordering(879) 00:14:20.766 fused_ordering(880) 00:14:20.766 fused_ordering(881) 00:14:20.766 fused_ordering(882) 00:14:20.766 fused_ordering(883) 00:14:20.766 fused_ordering(884) 00:14:20.766 fused_ordering(885) 00:14:20.766 fused_ordering(886) 00:14:20.766 fused_ordering(887) 00:14:20.766 fused_ordering(888) 00:14:20.766 fused_ordering(889) 00:14:20.766 fused_ordering(890) 00:14:20.766 fused_ordering(891) 00:14:20.766 fused_ordering(892) 00:14:20.766 fused_ordering(893) 00:14:20.766 fused_ordering(894) 00:14:20.766 fused_ordering(895) 00:14:20.766 fused_ordering(896) 00:14:20.766 fused_ordering(897) 00:14:20.766 fused_ordering(898) 00:14:20.766 fused_ordering(899) 00:14:20.766 fused_ordering(900) 00:14:20.766 fused_ordering(901) 00:14:20.766 fused_ordering(902) 00:14:20.766 fused_ordering(903) 00:14:20.766 fused_ordering(904) 00:14:20.766 fused_ordering(905) 00:14:20.766 fused_ordering(906) 00:14:20.766 fused_ordering(907) 00:14:20.766 fused_ordering(908) 00:14:20.766 fused_ordering(909) 00:14:20.766 fused_ordering(910) 00:14:20.766 fused_ordering(911) 00:14:20.766 fused_ordering(912) 00:14:20.766 fused_ordering(913) 00:14:20.766 fused_ordering(914) 00:14:20.766 fused_ordering(915) 00:14:20.766 fused_ordering(916) 00:14:20.766 fused_ordering(917) 00:14:20.766 fused_ordering(918) 00:14:20.766 fused_ordering(919) 00:14:20.766 fused_ordering(920) 00:14:20.766 fused_ordering(921) 00:14:20.766 fused_ordering(922) 00:14:20.766 fused_ordering(923) 00:14:20.766 fused_ordering(924) 00:14:20.766 fused_ordering(925) 00:14:20.766 fused_ordering(926) 00:14:20.766 fused_ordering(927) 00:14:20.766 fused_ordering(928) 00:14:20.766 fused_ordering(929) 00:14:20.766 fused_ordering(930) 00:14:20.766 fused_ordering(931) 00:14:20.766 fused_ordering(932) 00:14:20.766 fused_ordering(933) 00:14:20.766 fused_ordering(934) 00:14:20.766 
fused_ordering(935) 00:14:20.766 fused_ordering(936) 00:14:20.766 fused_ordering(937) 00:14:20.766 fused_ordering(938) 00:14:20.766 fused_ordering(939) 00:14:20.766 fused_ordering(940) 00:14:20.766 fused_ordering(941) 00:14:20.766 fused_ordering(942) 00:14:20.766 fused_ordering(943) 00:14:20.766 fused_ordering(944) 00:14:20.766 fused_ordering(945) 00:14:20.766 fused_ordering(946) 00:14:20.766 fused_ordering(947) 00:14:20.766 fused_ordering(948) 00:14:20.766 fused_ordering(949) 00:14:20.766 fused_ordering(950) 00:14:20.766 fused_ordering(951) 00:14:20.766 fused_ordering(952) 00:14:20.766 fused_ordering(953) 00:14:20.766 fused_ordering(954) 00:14:20.766 fused_ordering(955) 00:14:20.766 fused_ordering(956) 00:14:20.766 fused_ordering(957) 00:14:20.766 fused_ordering(958) 00:14:20.766 fused_ordering(959) 00:14:20.766 fused_ordering(960) 00:14:20.766 fused_ordering(961) 00:14:20.766 fused_ordering(962) 00:14:20.766 fused_ordering(963) 00:14:20.766 fused_ordering(964) 00:14:20.766 fused_ordering(965) 00:14:20.766 fused_ordering(966) 00:14:20.766 fused_ordering(967) 00:14:20.766 fused_ordering(968) 00:14:20.766 fused_ordering(969) 00:14:20.766 fused_ordering(970) 00:14:20.766 fused_ordering(971) 00:14:20.766 fused_ordering(972) 00:14:20.766 fused_ordering(973) 00:14:20.766 fused_ordering(974) 00:14:20.766 fused_ordering(975) 00:14:20.766 fused_ordering(976) 00:14:20.766 fused_ordering(977) 00:14:20.766 fused_ordering(978) 00:14:20.766 fused_ordering(979) 00:14:20.766 fused_ordering(980) 00:14:20.766 fused_ordering(981) 00:14:20.766 fused_ordering(982) 00:14:20.766 fused_ordering(983) 00:14:20.766 fused_ordering(984) 00:14:20.766 fused_ordering(985) 00:14:20.766 fused_ordering(986) 00:14:20.766 fused_ordering(987) 00:14:20.766 fused_ordering(988) 00:14:20.766 fused_ordering(989) 00:14:20.766 fused_ordering(990) 00:14:20.766 fused_ordering(991) 00:14:20.766 fused_ordering(992) 00:14:20.766 fused_ordering(993) 00:14:20.766 fused_ordering(994) 00:14:20.766 fused_ordering(995) 
00:14:20.766 fused_ordering(996) 00:14:20.766 fused_ordering(997) 00:14:20.766 fused_ordering(998) 00:14:20.766 fused_ordering(999) 00:14:20.766 fused_ordering(1000) 00:14:20.766 fused_ordering(1001) 00:14:20.766 fused_ordering(1002) 00:14:20.766 fused_ordering(1003) 00:14:20.766 fused_ordering(1004) 00:14:20.766 fused_ordering(1005) 00:14:20.766 fused_ordering(1006) 00:14:20.766 fused_ordering(1007) 00:14:20.766 fused_ordering(1008) 00:14:20.766 fused_ordering(1009) 00:14:20.766 fused_ordering(1010) 00:14:20.766 fused_ordering(1011) 00:14:20.766 fused_ordering(1012) 00:14:20.766 fused_ordering(1013) 00:14:20.766 fused_ordering(1014) 00:14:20.766 fused_ordering(1015) 00:14:20.766 fused_ordering(1016) 00:14:20.766 fused_ordering(1017) 00:14:20.766 fused_ordering(1018) 00:14:20.766 fused_ordering(1019) 00:14:20.766 fused_ordering(1020) 00:14:20.766 fused_ordering(1021) 00:14:20.766 fused_ordering(1022) 00:14:20.766 fused_ordering(1023) 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.766 rmmod nvme_tcp 00:14:20.766 rmmod nvme_fabrics 00:14:20.766 rmmod nvme_keyring 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 
00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3547674 ']' 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3547674 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3547674 ']' 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3547674 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3547674 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3547674' 00:14:20.766 killing process with pid 3547674 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3547674 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3547674 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.766 18:46:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.298 18:46:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.298 00:14:23.298 real 0m7.498s 00:14:23.298 user 0m5.151s 00:14:23.298 sys 0m3.220s 00:14:23.298 18:46:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.298 18:46:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 ************************************ 00:14:23.298 END TEST nvmf_fused_ordering 00:14:23.298 ************************************ 00:14:23.298 18:46:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:23.298 18:46:11 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:23.298 18:46:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:23.298 18:46:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.298 18:46:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 ************************************ 00:14:23.298 START TEST nvmf_delete_subsystem 00:14:23.298 ************************************ 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:23.298 * Looking for test storage... 
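The teardown trace above runs autotest_common.sh's killprocess helper: probe the PID with `kill -0`, send the signal, then `wait` so the process is reaped. A minimal stand-alone sketch of that pattern (the function name and messages here are illustrative, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch of a killprocess-style helper: verify the PID is alive,
# send SIGTERM, then reap the child so no zombie is left behind.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap; ignore the exit status
    return 0
}

sleep 100 &
bgpid=$!
killprocess_sketch "$bgpid"
kill -0 "$bgpid" 2>/dev/null && echo "still running" || echo "stopped"
```

`wait` only works on children of the current shell, which is why the real helper is sourced into the test script rather than executed as a separate process.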
00:14:23.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.298 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.298 18:46:11 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.299 18:46:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.732 18:46:12 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:24.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:24.732 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:24.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.732 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:24.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.733 
18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.733 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.991 18:46:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:24.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:14:24.991 00:14:24.991 --- 10.0.0.2 ping statistics --- 00:14:24.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.991 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:14:24.991 00:14:24.991 --- 10.0.0.1 ping statistics --- 00:14:24.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.991 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.991 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:24.992 
18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3549900 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3549900 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3549900 ']' 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:24.992 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:24.992 [2024-07-14 18:46:13.143688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:14:24.992 [2024-07-14 18:46:13.143770] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.992 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.992 [2024-07-14 18:46:13.207346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:25.250 [2024-07-14 18:46:13.296175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.250 [2024-07-14 18:46:13.296235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.250 [2024-07-14 18:46:13.296251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.250 [2024-07-14 18:46:13.296264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.250 [2024-07-14 18:46:13.296276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:25.250 [2024-07-14 18:46:13.296357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.250 [2024-07-14 18:46:13.296376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:25.250 [2024-07-14 18:46:13.439420] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:25.250 [2024-07-14 18:46:13.455713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:25.250 NULL1 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.250 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:25.507 Delay0 00:14:25.507 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.507 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.507 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.507 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:25.507 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:25.508 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3550043 00:14:25.508 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:25.508 18:46:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:25.508 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.508 [2024-07-14 18:46:13.530304] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:27.402 18:46:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.402 18:46:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.402 18:46:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.660 Write completed with error (sct=0, sc=8) 00:14:27.660 starting I/O failed: -6 00:14:27.660 [2024-07-14 18:46:15.821169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0fac000c00 is same with the state(5) to be set 00:14:27.660 [2024-07-14 18:46:15.822381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0fac00d2f0 is same with the state(5) to be set
00:14:27.661 starting I/O failed: -6 00:14:28.593 [2024-07-14 18:46:16.792330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1537630 is same with the state(5) to be set 00:14:28.850 Read completed with error (sct=0, sc=8) 00:14:28.851 [2024-07-14 18:46:16.824255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151fd40 is same with the state(5) to be set 00:14:28.851 [2024-07-14 18:46:16.824534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151aec0 is same with the state(5) to be set 00:14:28.851 [2024-07-14 18:46:16.825466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0fac00cfe0 is same with the state(5) to be set 00:14:28.851 [2024-07-14 18:46:16.825665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0fac00d600 is same with the state(5) to be set 00:14:28.851 Initializing NVMe Controllers 00:14:28.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:28.851 Controller IO queue size 128, less than required. 00:14:28.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:28.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:28.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:28.851 Initialization complete. Launching workers. 00:14:28.851 ======================================================== 00:14:28.851 Latency(us) 00:14:28.851 Device Information : IOPS MiB/s Average min max 00:14:28.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.59 0.09 901729.20 599.08 1012327.32 00:14:28.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.28 0.08 913970.07 694.98 1012263.17 00:14:28.851 ======================================================== 00:14:28.851 Total : 349.87 0.17 907406.88 599.08 1012327.32 00:14:28.851 00:14:28.851 [2024-07-14 18:46:16.826456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1537630 (9): Bad file descriptor 00:14:28.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:28.851 18:46:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.851 18:46:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:28.851 18:46:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3550043 00:14:28.851 18:46:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3550043 00:14:29.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3550043) - No such process 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3550043 00:14:29.109 18:46:17 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3550043 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3550043 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.109 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.367 [2024-07-14 18:46:17.344307] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3550449 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3550449 00:14:29.367 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.367 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.367 [2024-07-14 18:46:17.401937] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:14:29.932 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.932 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3550449 00:14:29.932 18:46:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.189 18:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.189 18:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3550449 00:14:30.189 18:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.754 18:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.754 18:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3550449 00:14:30.754 18:46:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:31.319 18:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:31.319 18:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3550449 00:14:31.319 18:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:31.885 18:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:31.885 18:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3550449 00:14:31.885 18:46:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:32.452 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:32.452 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3550449 00:14:32.452 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:32.452 Initializing NVMe Controllers 00:14:32.452 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:32.452 Controller IO queue size 128, less than required. 00:14:32.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:32.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:32.452 Initialization complete. Launching workers. 00:14:32.452 ======================================================== 00:14:32.452 Latency(us) 00:14:32.452 Device Information : IOPS MiB/s Average min max 00:14:32.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004500.37 1000195.36 1042314.38 00:14:32.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004308.84 1000206.91 1011040.29 00:14:32.452 ======================================================== 00:14:32.452 Total : 256.00 0.12 1004404.61 1000195.36 1042314.38 00:14:32.452 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3550449 00:14:32.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3550449) - No such process 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3550449 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.710 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.710 rmmod nvme_tcp 00:14:32.710 rmmod nvme_fabrics 00:14:32.710 rmmod nvme_keyring 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3549900 ']' 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3549900 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3549900 ']' 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3549900 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3549900 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3549900' 00:14:32.967 killing process with pid 3549900 00:14:32.967 18:46:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3549900 00:14:32.967 18:46:20 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 3549900 00:14:33.224 18:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.224 18:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.224 18:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.224 18:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.224 18:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.224 18:46:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.224 18:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.224 18:46:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.126 18:46:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.126 00:14:35.126 real 0m12.221s 00:14:35.126 user 0m27.891s 00:14:35.126 sys 0m2.953s 00:14:35.126 18:46:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.126 18:46:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:35.126 ************************************ 00:14:35.126 END TEST nvmf_delete_subsystem 00:14:35.126 ************************************ 00:14:35.126 18:46:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:35.126 18:46:23 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:35.126 18:46:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.126 18:46:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.126 18:46:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.126 ************************************ 
00:14:35.126 START TEST nvmf_ns_masking 00:14:35.126 ************************************ 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:35.126 * Looking for test storage... 00:14:35.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.126 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.385 18:46:23 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f3075905-5179-4979-9ce0-d45a71372de9 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b2f317ef-b0da-4987-a5e3-26e62835444d 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1cefcded-0487-4e33-9fd1-b1ec74748fa7 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 
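The delete_subsystem teardown at the top of this excerpt spins on `kill -0` (target/delete_subsystem.sh lines 57-60) until the target PID disappears, sleeping 0.5 s between probes with a 20-iteration cap. A minimal standalone sketch of that wait loop, reconstructed from the trace (the cap and sleep interval match the log; the function name is ours):

```shell
# Reconstructed from the delete_subsystem.sh trace: kill -0 probes for process
# existence without delivering a signal; give up after roughly 10 s.
wait_for_exit() {
  local pid=$1 delay=0
  while kill -0 "$pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
      return 1          # process still alive after the cap
    fi
    sleep 0.5
  done
  return 0              # kill -0 failed: process is gone
}
```

The test harness pairs this with `wait "$pid"` afterwards so the shell reaps the child and collects its exit status.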
00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.385 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.386 18:46:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.386 18:46:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:37.289 
18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:37.289 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:37.289 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:37.289 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.289 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:37.290 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:37.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:37.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms
00:14:37.290
00:14:37.290 --- 10.0.0.2 ping statistics ---
00:14:37.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:37.290 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:37.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:37.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms
00:14:37.290
00:14:37.290 --- 10.0.0.1 ping statistics ---
00:14:37.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:37.290 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:37.290 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3552796
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3552796
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3552796 ']'
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:37.548 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:37.548 [2024-07-14 18:46:25.579846] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
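nvmfappstart above launches nvmf_tgt inside the target netns and then blocks in waitforlisten until PID 3552796 has bound /var/tmp/spdk.sock. A hedged sketch of that polling pattern (a simplified stand-in, not SPDK's exact implementation; it only checks that the process is alive and the socket path exists):

```shell
# Simplified take on autotest_common.sh's waitforlisten: poll until the app has
# created its UNIX-domain RPC socket, failing fast if the process dies first.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100
  while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1  # app exited during startup
    [ -S "$sock" ] && return 0              # socket bound: app is listening
    sleep 0.1
  done
  return 1                                  # timed out waiting for the socket
}
```

The real helper goes further and issues an RPC over the socket to confirm the app is responsive, not merely that the path exists.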
00:14:37.548 [2024-07-14 18:46:25.579946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.548 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.548 [2024-07-14 18:46:25.648610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.548 [2024-07-14 18:46:25.736788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.548 [2024-07-14 18:46:25.736851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.548 [2024-07-14 18:46:25.736866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.548 [2024-07-14 18:46:25.736888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.548 [2024-07-14 18:46:25.736901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:37.548 [2024-07-14 18:46:25.736934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.805 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.805 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:37.805 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.805 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:37.805 18:46:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.805 18:46:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.805 18:46:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:38.063 [2024-07-14 18:46:26.093609] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.063 18:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:38.063 18:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:38.063 18:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:38.321 Malloc1 00:14:38.321 18:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:38.578 Malloc2 00:14:38.578 18:46:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:38.836 18:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:39.093 18:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.351 [2024-07-14 18:46:27.558154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.609 18:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:39.609 18:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1cefcded-0487-4e33-9fd1-b1ec74748fa7 -a 10.0.0.2 -s 4420 -i 4 00:14:39.609 18:46:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.609 18:46:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:39.609 18:46:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.609 18:46:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:39.609 18:46:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:42.133 
18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:42.133 [ 0]:0x1 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d807b69220c45978c0a655a40baac6b 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d807b69220c45978c0a655a40baac6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.133 18:46:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:42.133 [ 0]:0x1 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
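The ns_is_visible checks above pair `nvme list-ns` with `nvme id-ns ... -o json | jq -r .nguid`, treating an all-zero NGUID as "not visible". A hedged reconstruction of that helper from the trace (the exact grep pattern and the /dev path handling are assumptions):

```shell
# Reconstruction of target/ns_masking.sh's ns_is_visible as seen in the trace:
# the NSID must appear in `nvme list-ns` output and its NGUID must be nonzero.
nguid_is_nonzero() {
  [[ $1 != "00000000000000000000000000000000" ]]
}
ns_is_visible() {
  local ctrl=$1 nsid=$2 nguid
  nvme list-ns "/dev/$ctrl" | grep -q "$nsid" || return 1
  nguid=$(nvme id-ns "/dev/$ctrl" -n "$nsid" -o json | jq -r .nguid)
  nguid_is_nonzero "$nguid"
}
```

In the log, NSID 0x1 reports NGUID 7d807b69220c45978c0a655a40baac6b, so the visibility assertion passes; a masked namespace would fail either the list-ns grep or the nonzero-NGUID check.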
00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d807b69220c45978c0a655a40baac6b 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d807b69220c45978c0a655a40baac6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:42.133 [ 1]:0x2 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c07b2540b3a4460b3e3f2e8bf1eb2a7 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c07b2540b3a4460b3e3f2e8bf1eb2a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:42.133 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.398 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.987 18:46:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:42.987 18:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:42.987 18:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1cefcded-0487-4e33-9fd1-b1ec74748fa7 -a 10.0.0.2 -s 4420 -i 4 00:14:43.244 18:46:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:43.244 18:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:43.244 18:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.244 18:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:43.244 18:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:43.244 18:46:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:45.140 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:45.140 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:45.140 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.140 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:45.140 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.140 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:45.140 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:45.140 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
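The `ns_is_visible` checks exercised throughout this trace (target/ns_masking.sh@43-45) reduce to: list the namespace IDs, then treat a namespace as visible only if its NGUID from `nvme id-ns` is non-zero. A sketch of that comparison with the `nvme`/`jq` pipeline stubbed out so the logic runs standalone (the stub NGUIDs are taken from the trace; the helper name and structure otherwise follow the script):

```shell
ZERO_NGUID=00000000000000000000000000000000

# Stand-in for: nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid
ns_nguid() {
  case "$1" in
    0x1) echo 7d807b69220c45978c0a655a40baac6b ;;  # visible namespace
    *)   echo "$ZERO_NGUID" ;;                      # masked namespace
  esac
}

# A namespace is visible to this host iff its NGUID is not all zeros.
ns_is_visible() {
  [[ $(ns_nguid "$1") != "$ZERO_NGUID" ]]
}

ns_is_visible 0x1 && echo "0x1 visible"
ns_is_visible 0x2 || echo "0x2 masked"
```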
00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:45.396 [ 0]:0x2 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c07b2540b3a4460b3e3f2e8bf1eb2a7 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c07b2540b3a4460b3e3f2e8bf1eb2a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.396 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.653 [ 0]:0x1 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d807b69220c45978c0a655a40baac6b 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d807b69220c45978c0a655a40baac6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:14:45.653 [ 1]:0x2 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.653 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.910 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c07b2540b3a4460b3e3f2e8bf1eb2a7 00:14:45.910 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c07b2540b3a4460b3e3f2e8bf1eb2a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.911 18:46:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:46.168 [ 0]:0x2 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c07b2540b3a4460b3e3f2e8bf1eb2a7 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c07b2540b3a4460b3e3f2e8bf1eb2a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.168 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:46.425 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:46.425 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1cefcded-0487-4e33-9fd1-b1ec74748fa7 -a 10.0.0.2 -s 4420 -i 4 00:14:46.682 18:46:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:46.682 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:46.682 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.682 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:46.682 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:46.682 18:46:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:49.207 [ 0]:0x1 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d807b69220c45978c0a655a40baac6b 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d807b69220c45978c0a655a40baac6b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:49.207 [ 1]:0x2 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c07b2540b3a4460b3e3f2e8bf1eb2a7 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c07b2540b3a4460b3e3f2e8bf1eb2a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.207 18:46:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.207 18:46:37 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:49.207 [ 0]:0x2 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c07b2540b3a4460b3e3f2e8bf1eb2a7 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c07b2540b3a4460b3e3f2e8bf1eb2a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:49.207 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:49.465 [2024-07-14 18:46:37.628972] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:49.465 request: 00:14:49.465 { 00:14:49.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.465 "nsid": 2, 00:14:49.465 "host": "nqn.2016-06.io.spdk:host1", 00:14:49.465 "method": "nvmf_ns_remove_host", 00:14:49.465 "req_id": 1 00:14:49.465 } 00:14:49.465 Got JSON-RPC error response 00:14:49.465 response: 00:14:49.465 { 00:14:49.465 "code": -32602, 00:14:49.465 "message": "Invalid parameters" 00:14:49.465 } 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:49.465 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:49.724 [ 0]:0x2 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c07b2540b3a4460b3e3f2e8bf1eb2a7 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c07b2540b3a4460b3e3f2e8bf1eb2a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3554417 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3554417 /var/tmp/host.sock 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3554417 ']' 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:49.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
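The `NOT` wrapper traced repeatedly above (autotest_common.sh@648-675) inverts a command's exit status: it succeeds only when the wrapped command fails. The real helper also distinguishes signal exits (`(( es > 128 ))`) and matched error output; this simplified sketch keeps only the core inversion:

```shell
# Simplified sketch of the NOT helper: run the command, capture its exit
# status without tripping errexit, and succeed iff the command failed.
NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))
}

NOT false && echo "false failed as expected"
NOT true  || echo "true succeeded, so NOT reports failure"
```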
00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.724 18:46:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:49.724 [2024-07-14 18:46:37.827236] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:49.724 [2024-07-14 18:46:37.827319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554417 ] 00:14:49.724 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.724 [2024-07-14 18:46:37.889523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.982 [2024-07-14 18:46:37.981531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.240 18:46:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.240 18:46:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:50.240 18:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.498 18:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:50.755 18:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f3075905-5179-4979-9ce0-d45a71372de9 00:14:50.755 18:46:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:50.755 18:46:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F3075905517949799CE0D45A71372DE9 -i 00:14:51.013 18:46:39 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # uuid2nguid b2f317ef-b0da-4987-a5e3-26e62835444d 00:14:51.013 18:46:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:51.013 18:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B2F317EFB0DA4987A5E326E62835444D -i 00:14:51.270 18:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:51.527 18:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:51.785 18:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:51.785 18:46:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:52.351 nvme0n1 00:14:52.351 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:52.351 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:52.609 nvme1n2 00:14:52.609 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # 
hostrpc bdev_get_bdevs 00:14:52.609 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:52.609 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:52.609 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:52.609 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:52.866 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:52.866 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:52.866 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:52.866 18:46:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:53.124 18:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f3075905-5179-4979-9ce0-d45a71372de9 == \f\3\0\7\5\9\0\5\-\5\1\7\9\-\4\9\7\9\-\9\c\e\0\-\d\4\5\a\7\1\3\7\2\d\e\9 ]] 00:14:53.124 18:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:53.124 18:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:53.124 18:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ b2f317ef-b0da-4987-a5e3-26e62835444d == \b\2\f\3\1\7\e\f\-\b\0\d\a\-\4\9\8\7\-\a\5\e\3\-\2\6\e\6\2\8\3\5\4\4\4\d ]] 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3554417 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # 
'[' -z 3554417 ']' 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3554417 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3554417 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3554417' 00:14:53.382 killing process with pid 3554417 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3554417 00:14:53.382 18:46:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3554417 00:14:53.640 18:46:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.898 18:46:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:53.898 18:46:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:53.898 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.898 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:53.898 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.898 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:53.898 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.898 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.898 rmmod nvme_tcp 00:14:53.898 rmmod 
nvme_fabrics 00:14:54.156 rmmod nvme_keyring 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3552796 ']' 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3552796 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3552796 ']' 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3552796 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3552796 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3552796' 00:14:54.156 killing process with pid 3552796 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3552796 00:14:54.156 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3552796 00:14:54.415 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:54.415 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:54.415 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:54.415 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:14:54.415 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:54.415 18:46:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.415 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.415 18:46:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.316 18:46:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.316 00:14:56.316 real 0m21.214s 00:14:56.316 user 0m27.732s 00:14:56.316 sys 0m4.070s 00:14:56.316 18:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.316 18:46:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:56.316 ************************************ 00:14:56.316 END TEST nvmf_ns_masking 00:14:56.316 ************************************ 00:14:56.316 18:46:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:56.316 18:46:44 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:56.316 18:46:44 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:56.316 18:46:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.316 18:46:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.316 18:46:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.574 ************************************ 00:14:56.574 START TEST nvmf_nvme_cli 00:14:56.574 ************************************ 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:56.574 * Looking for test storage... 
00:14:56.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.574 18:46:44 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.574 18:46:44 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:56.574 18:46:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:58.472 18:46:46 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.472 18:46:46 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:58.472 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:58.472 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.472 18:46:46 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:58.472 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:58.472 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.472 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:58.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:14:58.472 00:14:58.473 --- 10.0.0.2 ping statistics --- 00:14:58.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.473 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:58.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:14:58.473 00:14:58.473 --- 10.0.0.1 ping statistics --- 00:14:58.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.473 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3556901 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3556901 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3556901 ']' 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.473 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.473 [2024-07-14 18:46:46.684933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:58.473 [2024-07-14 18:46:46.685022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.730 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.730 [2024-07-14 18:46:46.754508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.730 [2024-07-14 18:46:46.849656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.730 [2024-07-14 18:46:46.849719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.730 [2024-07-14 18:46:46.849735] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.730 [2024-07-14 18:46:46.849748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.730 [2024-07-14 18:46:46.849759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:58.730 [2024-07-14 18:46:46.850008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.730 [2024-07-14 18:46:46.850050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.730 [2024-07-14 18:46:46.850100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.730 [2024-07-14 18:46:46.850103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.988 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.988 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:14:58.988 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.988 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.988 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.988 18:46:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.988 18:46:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.988 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.988 18:46:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.988 [2024-07-14 18:46:47.001931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.988 Malloc0 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.988 
18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.988 Malloc1 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.988 [2024-07-14 18:46:47.087002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.988 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.989 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:58.989 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.989 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.989 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.989 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:59.246 00:14:59.246 Discovery Log Number of Records 2, Generation counter 2 00:14:59.246 =====Discovery Log Entry 0====== 00:14:59.246 trtype: tcp 00:14:59.246 adrfam: ipv4 00:14:59.246 subtype: current discovery subsystem 00:14:59.246 treq: not required 00:14:59.246 portid: 0 00:14:59.246 trsvcid: 4420 00:14:59.246 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:59.246 traddr: 10.0.0.2 00:14:59.246 eflags: explicit discovery connections, duplicate discovery information 00:14:59.246 sectype: none 00:14:59.247 =====Discovery Log Entry 1====== 00:14:59.247 trtype: tcp 00:14:59.247 adrfam: ipv4 00:14:59.247 subtype: nvme subsystem 00:14:59.247 treq: not required 00:14:59.247 portid: 0 00:14:59.247 trsvcid: 4420 00:14:59.247 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:59.247 traddr: 10.0.0.2 00:14:59.247 eflags: none 00:14:59.247 sectype: none 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:59.247 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.812 18:46:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:59.812 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.812 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.812 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:59.812 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:59.812 18:46:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:01.712 /dev/nvme0n1 ]] 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
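The `get_nvme_devs` enumeration traced above reads `nvme list` output line by line and keeps only rows whose first column is a device node. A minimal sketch of that filter, using canned sample output (no NVMe hardware is assumed here):

```shell
#!/usr/bin/env bash
# Sketch of the get_nvme_devs helper traced above: read `nvme list`
# output and emit only the /dev/nvme* device-node column, skipping the
# "Node ..." header and the dashed separator row.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

# Canned stand-in for `nvme list` output, shaped like the trace above.
sample_nvme_list() {
    cat <<'EOF'
Node                  SN                   Model
--------------------- -------------------- ------------------------
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK bdev Controller
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK bdev Controller
EOF
}

devs=($(sample_nvme_list | get_nvme_devs))
echo "found ${#devs[@]} devices: ${devs[*]}"
# → found 2 devices: /dev/nvme0n2 /dev/nvme0n1
```

The test script captures the same list into an array (`devs=($(get_nvme_devs))`) to compare device counts before connecting and after disconnecting.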
00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:01.712 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.970 18:46:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.970 rmmod nvme_tcp 00:15:01.970 rmmod nvme_fabrics 00:15:01.970 rmmod nvme_keyring 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3556901 ']' 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3556901 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 3556901 ']' 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3556901 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3556901 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3556901' 00:15:01.970 killing process with pid 3556901 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3556901 00:15:01.970 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3556901 00:15:02.229 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.229 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:02.229 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:02.229 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.229 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.229 18:46:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.229 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.229 18:46:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.762 18:46:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:04.762 00:15:04.762 real 0m7.862s 00:15:04.762 user 
0m14.423s 00:15:04.762 sys 0m2.073s 00:15:04.762 18:46:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.762 18:46:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:04.762 ************************************ 00:15:04.762 END TEST nvmf_nvme_cli 00:15:04.762 ************************************ 00:15:04.762 18:46:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:04.762 18:46:52 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:04.762 18:46:52 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:04.762 18:46:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:04.762 18:46:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.762 18:46:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:04.762 ************************************ 00:15:04.762 START TEST nvmf_vfio_user 00:15:04.762 ************************************ 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:04.762 * Looking for test storage... 
00:15:04.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.762 
18:46:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:04.762 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3557703 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3557703' 00:15:04.763 Process pid: 3557703 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3557703 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3557703 ']' 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:04.763 [2024-07-14 18:46:52.588340] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:04.763 [2024-07-14 18:46:52.588437] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.763 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.763 [2024-07-14 18:46:52.646655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.763 [2024-07-14 18:46:52.730795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.763 [2024-07-14 18:46:52.730848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.763 [2024-07-14 18:46:52.730882] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.763 [2024-07-14 18:46:52.730895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.763 [2024-07-14 18:46:52.730905] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:04.763 [2024-07-14 18:46:52.730970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.763 [2024-07-14 18:46:52.731034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.763 [2024-07-14 18:46:52.731084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.763 [2024-07-14 18:46:52.731087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:04.763 18:46:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:05.713 18:46:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:05.970 18:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:05.970 18:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:05.970 18:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:05.970 18:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:05.970 18:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:06.227 Malloc1 00:15:06.227 18:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:06.484 18:46:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:06.741 18:46:54 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:06.998 18:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:06.998 18:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:06.998 18:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:07.256 Malloc2 00:15:07.256 18:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:07.513 18:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:07.770 18:46:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:08.029 18:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:08.029 18:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:08.029 18:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:08.029 18:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:08.029 18:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:08.029 18:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:08.029 [2024-07-14 18:46:56.157189] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:08.029 [2024-07-14 18:46:56.157234] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3558124 ] 00:15:08.029 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.029 [2024-07-14 18:46:56.190995] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:08.029 [2024-07-14 18:46:56.196326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:08.029 [2024-07-14 18:46:56.196359] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbc79206000 00:15:08.029 [2024-07-14 18:46:56.197321] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.029 [2024-07-14 18:46:56.198312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.029 [2024-07-14 18:46:56.199318] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.029 [2024-07-14 18:46:56.200322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.029 [2024-07-14 18:46:56.201327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 
5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.029 [2024-07-14 18:46:56.202336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.029 [2024-07-14 18:46:56.203342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:08.029 [2024-07-14 18:46:56.204347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:08.029 [2024-07-14 18:46:56.208888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:08.029 [2024-07-14 18:46:56.208909] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbc77fba000 00:15:08.029 [2024-07-14 18:46:56.210028] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:08.029 [2024-07-14 18:46:56.225089] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:08.029 [2024-07-14 18:46:56.225124] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:08.029 [2024-07-14 18:46:56.227484] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:08.029 [2024-07-14 18:46:56.227538] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:08.029 [2024-07-14 18:46:56.227626] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:08.029 [2024-07-14 18:46:56.227652] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:08.029 [2024-07-14 18:46:56.227663] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:08.029 [2024-07-14 18:46:56.228476] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:08.029 [2024-07-14 18:46:56.228495] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:08.029 [2024-07-14 18:46:56.228507] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:08.029 [2024-07-14 18:46:56.229477] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:08.029 [2024-07-14 18:46:56.229495] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:08.029 [2024-07-14 18:46:56.229508] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:08.029 [2024-07-14 18:46:56.230480] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:08.029 [2024-07-14 18:46:56.230498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:08.029 [2024-07-14 18:46:56.231487] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:08.029 [2024-07-14 18:46:56.231504] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:08.029 [2024-07-14 18:46:56.231513] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:08.029 [2024-07-14 18:46:56.231524] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:08.029 [2024-07-14 18:46:56.231637] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:08.029 [2024-07-14 18:46:56.231646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:08.029 [2024-07-14 18:46:56.231654] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:08.029 [2024-07-14 18:46:56.232494] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:08.029 [2024-07-14 18:46:56.233500] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:08.029 [2024-07-14 18:46:56.234505] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:08.029 [2024-07-14 18:46:56.235500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:08.029 [2024-07-14 18:46:56.235608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:08.029 [2024-07-14 18:46:56.236517] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:08.029 [2024-07-14 18:46:56.236536] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:08.029 [2024-07-14 18:46:56.236545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.236570] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:08.030 [2024-07-14 18:46:56.236583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.236606] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.030 [2024-07-14 18:46:56.236616] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.030 [2024-07-14 18:46:56.236634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.236687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.236703] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:08.030 [2024-07-14 18:46:56.236715] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:08.030 [2024-07-14 18:46:56.236723] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:15:08.030 [2024-07-14 18:46:56.236731] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:08.030 [2024-07-14 18:46:56.236738] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:08.030 [2024-07-14 18:46:56.236746] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:08.030 [2024-07-14 18:46:56.236754] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.236766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.236781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.236796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.236817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.030 [2024-07-14 18:46:56.236831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.030 [2024-07-14 18:46:56.236843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.030 [2024-07-14 18:46:56.236854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:08.030 [2024-07-14 18:46:56.236888] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.236905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.236921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.236934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.236944] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:08.030 [2024-07-14 18:46:56.236953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.236964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.236974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.236987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.236999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:15:08.030 [2024-07-14 18:46:56.237077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237090] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:08.030 [2024-07-14 18:46:56.237098] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:08.030 [2024-07-14 18:46:56.237108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237143] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:08.030 [2024-07-14 18:46:56.237181] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237211] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.030 [2024-07-14 18:46:56.237234] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.030 [2024-07-14 18:46:56.237243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:08.030 
[2024-07-14 18:46:56.237289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237315] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:08.030 [2024-07-14 18:46:56.237323] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.030 [2024-07-14 18:46:56.237332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237381] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237408] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237415] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:08.030 [2024-07-14 18:46:56.237423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:08.030 [2024-07-14 18:46:56.237431] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:08.030 [2024-07-14 18:46:56.237455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237519] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237583] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:08.030 [2024-07-14 18:46:56.237593] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:08.030 [2024-07-14 18:46:56.237599] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:08.030 [2024-07-14 18:46:56.237605] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:08.030 [2024-07-14 18:46:56.237614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:08.030 [2024-07-14 18:46:56.237625] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:08.030 [2024-07-14 18:46:56.237633] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:08.030 [2024-07-14 18:46:56.237641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237652] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:08.030 [2024-07-14 18:46:56.237659] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:08.030 [2024-07-14 18:46:56.237668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237679] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:08.030 [2024-07-14 18:46:56.237686] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:08.030 [2024-07-14 18:46:56.237695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:08.030 [2024-07-14 18:46:56.237705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:08.030 [2024-07-14 18:46:56.237753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:08.031 ===================================================== 00:15:08.031 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:08.031 ===================================================== 00:15:08.031 Controller Capabilities/Features 00:15:08.031 ================================ 00:15:08.031 Vendor ID: 4e58 00:15:08.031 Subsystem Vendor ID: 4e58 00:15:08.031 Serial Number: SPDK1 00:15:08.031 Model Number: SPDK bdev Controller 00:15:08.031 Firmware Version: 24.09 00:15:08.031 Recommended Arb Burst: 6 00:15:08.031 IEEE OUI Identifier: 8d 6b 50 00:15:08.031 Multi-path I/O 00:15:08.031 May have multiple subsystem ports: Yes 00:15:08.031 May have multiple controllers: Yes 00:15:08.031 Associated with SR-IOV VF: No 00:15:08.031 Max Data Transfer Size: 131072 00:15:08.031 Max Number of Namespaces: 32 00:15:08.031 Max Number of I/O Queues: 127 00:15:08.031 NVMe Specification Version (VS): 1.3 00:15:08.031 NVMe Specification Version (Identify): 1.3 00:15:08.031 Maximum Queue Entries: 256 00:15:08.031 
Contiguous Queues Required: Yes 00:15:08.031 Arbitration Mechanisms Supported 00:15:08.031 Weighted Round Robin: Not Supported 00:15:08.031 Vendor Specific: Not Supported 00:15:08.031 Reset Timeout: 15000 ms 00:15:08.031 Doorbell Stride: 4 bytes 00:15:08.031 NVM Subsystem Reset: Not Supported 00:15:08.031 Command Sets Supported 00:15:08.031 NVM Command Set: Supported 00:15:08.031 Boot Partition: Not Supported 00:15:08.031 Memory Page Size Minimum: 4096 bytes 00:15:08.031 Memory Page Size Maximum: 4096 bytes 00:15:08.031 Persistent Memory Region: Not Supported 00:15:08.031 Optional Asynchronous Events Supported 00:15:08.031 Namespace Attribute Notices: Supported 00:15:08.031 Firmware Activation Notices: Not Supported 00:15:08.031 ANA Change Notices: Not Supported 00:15:08.031 PLE Aggregate Log Change Notices: Not Supported 00:15:08.031 LBA Status Info Alert Notices: Not Supported 00:15:08.031 EGE Aggregate Log Change Notices: Not Supported 00:15:08.031 Normal NVM Subsystem Shutdown event: Not Supported 00:15:08.031 Zone Descriptor Change Notices: Not Supported 00:15:08.031 Discovery Log Change Notices: Not Supported 00:15:08.031 Controller Attributes 00:15:08.031 128-bit Host Identifier: Supported 00:15:08.031 Non-Operational Permissive Mode: Not Supported 00:15:08.031 NVM Sets: Not Supported 00:15:08.031 Read Recovery Levels: Not Supported 00:15:08.031 Endurance Groups: Not Supported 00:15:08.031 Predictable Latency Mode: Not Supported 00:15:08.031 Traffic Based Keep ALive: Not Supported 00:15:08.031 Namespace Granularity: Not Supported 00:15:08.031 SQ Associations: Not Supported 00:15:08.031 UUID List: Not Supported 00:15:08.031 Multi-Domain Subsystem: Not Supported 00:15:08.031 Fixed Capacity Management: Not Supported 00:15:08.031 Variable Capacity Management: Not Supported 00:15:08.031 Delete Endurance Group: Not Supported 00:15:08.031 Delete NVM Set: Not Supported 00:15:08.031 Extended LBA Formats Supported: Not Supported 00:15:08.031 Flexible Data Placement 
Supported: Not Supported 00:15:08.031 00:15:08.031 Controller Memory Buffer Support 00:15:08.031 ================================ 00:15:08.031 Supported: No 00:15:08.031 00:15:08.031 Persistent Memory Region Support 00:15:08.031 ================================ 00:15:08.031 Supported: No 00:15:08.031 00:15:08.031 Admin Command Set Attributes 00:15:08.031 ============================ 00:15:08.031 Security Send/Receive: Not Supported 00:15:08.031 Format NVM: Not Supported 00:15:08.031 Firmware Activate/Download: Not Supported 00:15:08.031 Namespace Management: Not Supported 00:15:08.031 Device Self-Test: Not Supported 00:15:08.031 Directives: Not Supported 00:15:08.031 NVMe-MI: Not Supported 00:15:08.031 Virtualization Management: Not Supported 00:15:08.031 Doorbell Buffer Config: Not Supported 00:15:08.031 Get LBA Status Capability: Not Supported 00:15:08.031 Command & Feature Lockdown Capability: Not Supported 00:15:08.031 Abort Command Limit: 4 00:15:08.031 Async Event Request Limit: 4 00:15:08.031 Number of Firmware Slots: N/A 00:15:08.031 Firmware Slot 1 Read-Only: N/A 00:15:08.031 Firmware Activation Without Reset: N/A 00:15:08.031 Multiple Update Detection Support: N/A 00:15:08.031 Firmware Update Granularity: No Information Provided 00:15:08.031 Per-Namespace SMART Log: No 00:15:08.031 Asymmetric Namespace Access Log Page: Not Supported 00:15:08.031 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:08.031 Command Effects Log Page: Supported 00:15:08.031 Get Log Page Extended Data: Supported 00:15:08.031 Telemetry Log Pages: Not Supported 00:15:08.031 Persistent Event Log Pages: Not Supported 00:15:08.031 Supported Log Pages Log Page: May Support 00:15:08.031 Commands Supported & Effects Log Page: Not Supported 00:15:08.031 Feature Identifiers & Effects Log Page:May Support 00:15:08.031 NVMe-MI Commands & Effects Log Page: May Support 00:15:08.031 Data Area 4 for Telemetry Log: Not Supported 00:15:08.031 Error Log Page Entries Supported: 128 00:15:08.031 Keep 
Alive: Supported 00:15:08.031 Keep Alive Granularity: 10000 ms 00:15:08.031 00:15:08.031 NVM Command Set Attributes 00:15:08.031 ========================== 00:15:08.031 Submission Queue Entry Size 00:15:08.031 Max: 64 00:15:08.031 Min: 64 00:15:08.031 Completion Queue Entry Size 00:15:08.031 Max: 16 00:15:08.031 Min: 16 00:15:08.031 Number of Namespaces: 32 00:15:08.031 Compare Command: Supported 00:15:08.031 Write Uncorrectable Command: Not Supported 00:15:08.031 Dataset Management Command: Supported 00:15:08.031 Write Zeroes Command: Supported 00:15:08.031 Set Features Save Field: Not Supported 00:15:08.031 Reservations: Not Supported 00:15:08.031 Timestamp: Not Supported 00:15:08.031 Copy: Supported 00:15:08.031 Volatile Write Cache: Present 00:15:08.031 Atomic Write Unit (Normal): 1 00:15:08.031 Atomic Write Unit (PFail): 1 00:15:08.031 Atomic Compare & Write Unit: 1 00:15:08.031 Fused Compare & Write: Supported 00:15:08.031 Scatter-Gather List 00:15:08.031 SGL Command Set: Supported (Dword aligned) 00:15:08.031 SGL Keyed: Not Supported 00:15:08.031 SGL Bit Bucket Descriptor: Not Supported 00:15:08.031 SGL Metadata Pointer: Not Supported 00:15:08.031 Oversized SGL: Not Supported 00:15:08.031 SGL Metadata Address: Not Supported 00:15:08.031 SGL Offset: Not Supported 00:15:08.031 Transport SGL Data Block: Not Supported 00:15:08.031 Replay Protected Memory Block: Not Supported 00:15:08.031 00:15:08.031 Firmware Slot Information 00:15:08.031 ========================= 00:15:08.031 Active slot: 1 00:15:08.031 Slot 1 Firmware Revision: 24.09 00:15:08.031 00:15:08.031 00:15:08.031 Commands Supported and Effects 00:15:08.031 ============================== 00:15:08.031 Admin Commands 00:15:08.031 -------------- 00:15:08.031 Get Log Page (02h): Supported 00:15:08.031 Identify (06h): Supported 00:15:08.031 Abort (08h): Supported 00:15:08.031 Set Features (09h): Supported 00:15:08.031 Get Features (0Ah): Supported 00:15:08.031 Asynchronous Event Request (0Ch): Supported 
00:15:08.031 Keep Alive (18h): Supported 00:15:08.031 I/O Commands 00:15:08.031 ------------ 00:15:08.031 Flush (00h): Supported LBA-Change 00:15:08.031 Write (01h): Supported LBA-Change 00:15:08.031 Read (02h): Supported 00:15:08.031 Compare (05h): Supported 00:15:08.031 Write Zeroes (08h): Supported LBA-Change 00:15:08.031 Dataset Management (09h): Supported LBA-Change 00:15:08.031 Copy (19h): Supported LBA-Change 00:15:08.031 00:15:08.031 Error Log 00:15:08.031 ========= 00:15:08.031 00:15:08.031 Arbitration 00:15:08.031 =========== 00:15:08.031 Arbitration Burst: 1 00:15:08.031 00:15:08.031 Power Management 00:15:08.031 ================ 00:15:08.031 Number of Power States: 1 00:15:08.031 Current Power State: Power State #0 00:15:08.031 Power State #0: 00:15:08.031 Max Power: 0.00 W 00:15:08.031 Non-Operational State: Operational 00:15:08.031 Entry Latency: Not Reported 00:15:08.031 Exit Latency: Not Reported 00:15:08.031 Relative Read Throughput: 0 00:15:08.031 Relative Read Latency: 0 00:15:08.031 Relative Write Throughput: 0 00:15:08.031 Relative Write Latency: 0 00:15:08.031 Idle Power: Not Reported 00:15:08.031 Active Power: Not Reported 00:15:08.031 Non-Operational Permissive Mode: Not Supported 00:15:08.031 00:15:08.031 Health Information 00:15:08.031 ================== 00:15:08.031 Critical Warnings: 00:15:08.031 Available Spare Space: OK 00:15:08.031 Temperature: OK 00:15:08.031 Device Reliability: OK 00:15:08.031 Read Only: No 00:15:08.031 Volatile Memory Backup: OK 00:15:08.031 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:08.031 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:08.031 Available Spare: 0% 00:15:08.031 Available Sp[2024-07-14 18:46:56.237891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:08.031 [2024-07-14 18:46:56.237910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:15:08.031 [2024-07-14 18:46:56.237954] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:08.031 [2024-07-14 18:46:56.237973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.032 [2024-07-14 18:46:56.237984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.032 [2024-07-14 18:46:56.237994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.032 [2024-07-14 18:46:56.238004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.032 [2024-07-14 18:46:56.240889] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:08.032 [2024-07-14 18:46:56.240914] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:08.032 [2024-07-14 18:46:56.241537] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:08.032 [2024-07-14 18:46:56.241627] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:08.032 [2024-07-14 18:46:56.241641] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:08.032 [2024-07-14 18:46:56.242547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:08.032 [2024-07-14 18:46:56.242569] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:15:08.032 [2024-07-14 18:46:56.242621] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:08.032 [2024-07-14 18:46:56.244583] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:08.289 are Threshold: 0% 00:15:08.289 Life Percentage Used: 0% 00:15:08.289 Data Units Read: 0 00:15:08.289 Data Units Written: 0 00:15:08.289 Host Read Commands: 0 00:15:08.289 Host Write Commands: 0 00:15:08.289 Controller Busy Time: 0 minutes 00:15:08.289 Power Cycles: 0 00:15:08.289 Power On Hours: 0 hours 00:15:08.289 Unsafe Shutdowns: 0 00:15:08.289 Unrecoverable Media Errors: 0 00:15:08.289 Lifetime Error Log Entries: 0 00:15:08.289 Warning Temperature Time: 0 minutes 00:15:08.289 Critical Temperature Time: 0 minutes 00:15:08.289 00:15:08.289 Number of Queues 00:15:08.289 ================ 00:15:08.289 Number of I/O Submission Queues: 127 00:15:08.289 Number of I/O Completion Queues: 127 00:15:08.289 00:15:08.289 Active Namespaces 00:15:08.289 ================= 00:15:08.289 Namespace ID:1 00:15:08.289 Error Recovery Timeout: Unlimited 00:15:08.289 Command Set Identifier: NVM (00h) 00:15:08.289 Deallocate: Supported 00:15:08.289 Deallocated/Unwritten Error: Not Supported 00:15:08.289 Deallocated Read Value: Unknown 00:15:08.289 Deallocate in Write Zeroes: Not Supported 00:15:08.289 Deallocated Guard Field: 0xFFFF 00:15:08.289 Flush: Supported 00:15:08.289 Reservation: Supported 00:15:08.289 Namespace Sharing Capabilities: Multiple Controllers 00:15:08.289 Size (in LBAs): 131072 (0GiB) 00:15:08.289 Capacity (in LBAs): 131072 (0GiB) 00:15:08.289 Utilization (in LBAs): 131072 (0GiB) 00:15:08.289 NGUID: A790C0ED38434A7FB8EAF293897C84E6 00:15:08.289 UUID: a790c0ed-3843-4a7f-b8ea-f293897c84e6 00:15:08.289 Thin Provisioning: Not Supported 00:15:08.289 Per-NS Atomic Units: Yes 00:15:08.289 Atomic Boundary Size (Normal): 0 
00:15:08.289 Atomic Boundary Size (PFail): 0 00:15:08.289 Atomic Boundary Offset: 0 00:15:08.289 Maximum Single Source Range Length: 65535 00:15:08.289 Maximum Copy Length: 65535 00:15:08.289 Maximum Source Range Count: 1 00:15:08.289 NGUID/EUI64 Never Reused: No 00:15:08.289 Namespace Write Protected: No 00:15:08.289 Number of LBA Formats: 1 00:15:08.289 Current LBA Format: LBA Format #00 00:15:08.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:08.289 00:15:08.289 18:46:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:08.289 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.289 [2024-07-14 18:46:56.474715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.552 Initializing NVMe Controllers 00:15:13.552 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:13.552 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:13.552 Initialization complete. Launching workers. 
00:15:13.552 ======================================================== 00:15:13.552 Latency(us) 00:15:13.552 Device Information : IOPS MiB/s Average min max 00:15:13.552 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35502.75 138.68 3604.79 1164.09 8601.37 00:15:13.552 ======================================================== 00:15:13.552 Total : 35502.75 138.68 3604.79 1164.09 8601.37 00:15:13.552 00:15:13.552 [2024-07-14 18:47:01.494815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.552 18:47:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:13.552 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.552 [2024-07-14 18:47:01.738003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.823 Initializing NVMe Controllers 00:15:18.823 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:18.823 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:18.823 Initialization complete. Launching workers. 
00:15:18.823 ======================================================== 00:15:18.823 Latency(us) 00:15:18.823 Device Information : IOPS MiB/s Average min max 00:15:18.823 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7997.21 6971.89 14966.91 00:15:18.823 ======================================================== 00:15:18.823 Total : 16025.60 62.60 7997.21 6971.89 14966.91 00:15:18.823 00:15:18.823 [2024-07-14 18:47:06.775432] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.823 18:47:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:18.823 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.823 [2024-07-14 18:47:06.984457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:24.085 [2024-07-14 18:47:12.058288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:24.085 Initializing NVMe Controllers 00:15:24.085 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:24.085 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:24.085 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:24.085 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:24.085 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:24.085 Initialization complete. Launching workers. 
00:15:24.085 Starting thread on core 2 00:15:24.085 Starting thread on core 3 00:15:24.085 Starting thread on core 1 00:15:24.085 18:47:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:24.085 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.343 [2024-07-14 18:47:12.367388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.683 [2024-07-14 18:47:15.426557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.683 Initializing NVMe Controllers 00:15:27.683 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.683 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:27.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:27.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:27.683 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:27.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:27.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:27.683 Initialization complete. Launching workers. 
00:15:27.683 Starting thread on core 1 with urgent priority queue 00:15:27.683 Starting thread on core 2 with urgent priority queue 00:15:27.683 Starting thread on core 3 with urgent priority queue 00:15:27.683 Starting thread on core 0 with urgent priority queue 00:15:27.683 SPDK bdev Controller (SPDK1 ) core 0: 5314.67 IO/s 18.82 secs/100000 ios 00:15:27.683 SPDK bdev Controller (SPDK1 ) core 1: 5530.67 IO/s 18.08 secs/100000 ios 00:15:27.683 SPDK bdev Controller (SPDK1 ) core 2: 5580.33 IO/s 17.92 secs/100000 ios 00:15:27.683 SPDK bdev Controller (SPDK1 ) core 3: 5618.00 IO/s 17.80 secs/100000 ios 00:15:27.683 ======================================================== 00:15:27.683 00:15:27.683 18:47:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:27.683 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.683 [2024-07-14 18:47:15.726577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.683 Initializing NVMe Controllers 00:15:27.683 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.683 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.683 Namespace ID: 1 size: 0GB 00:15:27.683 Initialization complete. 00:15:27.683 INFO: using host memory buffer for IO 00:15:27.683 Hello world! 
00:15:27.683 [2024-07-14 18:47:15.760113] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.683 18:47:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:27.683 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.940 [2024-07-14 18:47:16.060355] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:28.871 Initializing NVMe Controllers 00:15:28.871 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.871 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.871 Initialization complete. Launching workers. 00:15:28.871 submit (in ns) avg, min, max = 7758.1, 3495.6, 4015433.3 00:15:28.871 complete (in ns) avg, min, max = 22570.5, 2058.9, 4017070.0 00:15:28.871 00:15:28.871 Submit histogram 00:15:28.871 ================ 00:15:28.871 Range in us Cumulative Count 00:15:28.871 3.484 - 3.508: 0.0370% ( 5) 00:15:28.871 3.508 - 3.532: 0.3772% ( 46) 00:15:28.871 3.532 - 3.556: 1.0207% ( 87) 00:15:28.871 3.556 - 3.579: 4.1346% ( 421) 00:15:28.871 3.579 - 3.603: 8.4246% ( 580) 00:15:28.871 3.603 - 3.627: 15.4142% ( 945) 00:15:28.871 3.627 - 3.650: 23.8905% ( 1146) 00:15:28.871 3.650 - 3.674: 32.9216% ( 1221) 00:15:28.871 3.674 - 3.698: 40.2737% ( 994) 00:15:28.871 3.698 - 3.721: 47.4260% ( 967) 00:15:28.871 3.721 - 3.745: 52.4186% ( 675) 00:15:28.871 3.745 - 3.769: 56.8935% ( 605) 00:15:28.871 3.769 - 3.793: 60.6731% ( 511) 00:15:28.871 3.793 - 3.816: 64.2382% ( 482) 00:15:28.871 3.816 - 3.840: 67.6627% ( 463) 00:15:28.871 3.840 - 3.864: 72.1006% ( 600) 00:15:28.871 3.864 - 3.887: 76.4571% ( 589) 00:15:28.871 3.887 - 3.911: 80.1627% ( 501) 00:15:28.871 3.911 - 3.935: 83.4911% ( 450) 00:15:28.871 3.935 - 3.959: 85.8728% ( 322) 
00:15:28.871 3.959 - 3.982: 87.6849% ( 245) 00:15:28.871 3.982 - 4.006: 89.4157% ( 234) 00:15:28.871 4.006 - 4.030: 90.6731% ( 170) 00:15:28.871 4.030 - 4.053: 91.7456% ( 145) 00:15:28.871 4.053 - 4.077: 92.8698% ( 152) 00:15:28.871 4.077 - 4.101: 93.7130% ( 114) 00:15:28.871 4.101 - 4.124: 94.3787% ( 90) 00:15:28.871 4.124 - 4.148: 95.0592% ( 92) 00:15:28.871 4.148 - 4.172: 95.5251% ( 63) 00:15:28.871 4.172 - 4.196: 95.9246% ( 54) 00:15:28.871 4.196 - 4.219: 96.1686% ( 33) 00:15:28.871 4.219 - 4.243: 96.3757% ( 28) 00:15:28.871 4.243 - 4.267: 96.5533% ( 24) 00:15:28.871 4.267 - 4.290: 96.6568% ( 14) 00:15:28.871 4.290 - 4.314: 96.7604% ( 14) 00:15:28.871 4.314 - 4.338: 96.8565% ( 13) 00:15:28.871 4.338 - 4.361: 96.9157% ( 8) 00:15:28.871 4.361 - 4.385: 96.9822% ( 9) 00:15:28.871 4.385 - 4.409: 96.9896% ( 1) 00:15:28.871 4.409 - 4.433: 97.0192% ( 4) 00:15:28.871 4.433 - 4.456: 97.0562% ( 5) 00:15:28.871 4.456 - 4.480: 97.0858% ( 4) 00:15:28.871 4.480 - 4.504: 97.1302% ( 6) 00:15:28.871 4.504 - 4.527: 97.1376% ( 1) 00:15:28.871 4.527 - 4.551: 97.1450% ( 1) 00:15:28.871 4.551 - 4.575: 97.1598% ( 2) 00:15:28.871 4.575 - 4.599: 97.1672% ( 1) 00:15:28.871 4.599 - 4.622: 97.1746% ( 1) 00:15:28.871 4.622 - 4.646: 97.1967% ( 3) 00:15:28.871 4.646 - 4.670: 97.2189% ( 3) 00:15:28.872 4.670 - 4.693: 97.2263% ( 1) 00:15:28.872 4.693 - 4.717: 97.2485% ( 3) 00:15:28.872 4.717 - 4.741: 97.2633% ( 2) 00:15:28.872 4.741 - 4.764: 97.3077% ( 6) 00:15:28.872 4.764 - 4.788: 97.3373% ( 4) 00:15:28.872 4.788 - 4.812: 97.3595% ( 3) 00:15:28.872 4.812 - 4.836: 97.3743% ( 2) 00:15:28.872 4.836 - 4.859: 97.3817% ( 1) 00:15:28.872 4.859 - 4.883: 97.4408% ( 8) 00:15:28.872 4.883 - 4.907: 97.5000% ( 8) 00:15:28.872 4.907 - 4.930: 97.5296% ( 4) 00:15:28.872 4.930 - 4.954: 97.5518% ( 3) 00:15:28.872 4.954 - 4.978: 97.5592% ( 1) 00:15:28.872 4.978 - 5.001: 97.6036% ( 6) 00:15:28.872 5.001 - 5.025: 97.6627% ( 8) 00:15:28.872 5.025 - 5.049: 97.6775% ( 2) 00:15:28.872 5.049 - 5.073: 97.7071% ( 4) 
00:15:28.872 5.073 - 5.096: 97.7441% ( 5) 00:15:28.872 5.096 - 5.120: 97.7811% ( 5) 00:15:28.872 5.120 - 5.144: 97.7959% ( 2) 00:15:28.872 5.144 - 5.167: 97.8254% ( 4) 00:15:28.872 5.167 - 5.191: 97.8328% ( 1) 00:15:28.872 5.191 - 5.215: 97.8402% ( 1) 00:15:28.872 5.215 - 5.239: 97.8476% ( 1) 00:15:28.872 5.239 - 5.262: 97.8550% ( 1) 00:15:28.872 5.262 - 5.286: 97.8624% ( 1) 00:15:28.872 5.286 - 5.310: 97.8920% ( 4) 00:15:28.872 5.310 - 5.333: 97.8994% ( 1) 00:15:28.872 5.428 - 5.452: 97.9068% ( 1) 00:15:28.872 5.476 - 5.499: 97.9142% ( 1) 00:15:28.872 5.499 - 5.523: 97.9216% ( 1) 00:15:28.872 5.547 - 5.570: 97.9290% ( 1) 00:15:28.872 5.570 - 5.594: 97.9364% ( 1) 00:15:28.872 5.594 - 5.618: 97.9438% ( 1) 00:15:28.872 5.641 - 5.665: 97.9512% ( 1) 00:15:28.872 5.689 - 5.713: 97.9586% ( 1) 00:15:28.872 5.784 - 5.807: 97.9660% ( 1) 00:15:28.872 5.831 - 5.855: 97.9734% ( 1) 00:15:28.872 5.855 - 5.879: 97.9956% ( 3) 00:15:28.872 5.997 - 6.021: 98.0030% ( 1) 00:15:28.872 6.068 - 6.116: 98.0104% ( 1) 00:15:28.872 6.116 - 6.163: 98.0178% ( 1) 00:15:28.872 6.163 - 6.210: 98.0251% ( 1) 00:15:28.872 6.210 - 6.258: 98.0325% ( 1) 00:15:28.872 6.305 - 6.353: 98.0399% ( 1) 00:15:28.872 6.400 - 6.447: 98.0473% ( 1) 00:15:28.872 6.542 - 6.590: 98.0547% ( 1) 00:15:28.872 6.590 - 6.637: 98.0621% ( 1) 00:15:28.872 6.827 - 6.874: 98.0695% ( 1) 00:15:28.872 6.874 - 6.921: 98.0843% ( 2) 00:15:28.872 7.016 - 7.064: 98.0917% ( 1) 00:15:28.872 7.111 - 7.159: 98.1065% ( 2) 00:15:28.872 7.159 - 7.206: 98.1213% ( 2) 00:15:28.872 7.253 - 7.301: 98.1287% ( 1) 00:15:28.872 7.348 - 7.396: 98.1361% ( 1) 00:15:28.872 7.396 - 7.443: 98.1435% ( 1) 00:15:28.872 7.443 - 7.490: 98.1509% ( 1) 00:15:28.872 7.490 - 7.538: 98.1583% ( 1) 00:15:28.872 7.538 - 7.585: 98.1805% ( 3) 00:15:28.872 7.585 - 7.633: 98.2027% ( 3) 00:15:28.872 7.680 - 7.727: 98.2101% ( 1) 00:15:28.872 7.727 - 7.775: 98.2249% ( 2) 00:15:28.872 7.775 - 7.822: 98.2322% ( 1) 00:15:28.872 7.822 - 7.870: 98.2470% ( 2) 00:15:28.872 7.870 - 
7.917: 98.2544% ( 1) 00:15:28.872 7.917 - 7.964: 98.2692% ( 2) 00:15:28.872 8.012 - 8.059: 98.2840% ( 2) 00:15:28.872 8.059 - 8.107: 98.2988% ( 2) 00:15:28.872 8.107 - 8.154: 98.3284% ( 4) 00:15:28.872 8.201 - 8.249: 98.3358% ( 1) 00:15:28.872 8.249 - 8.296: 98.3432% ( 1) 00:15:28.872 8.296 - 8.344: 98.3506% ( 1) 00:15:28.872 8.344 - 8.391: 98.3580% ( 1) 00:15:28.872 8.391 - 8.439: 98.3654% ( 1) 00:15:28.872 8.439 - 8.486: 98.3728% ( 1) 00:15:28.872 8.533 - 8.581: 98.3802% ( 1) 00:15:28.872 8.581 - 8.628: 98.3876% ( 1) 00:15:28.872 8.628 - 8.676: 98.4024% ( 2) 00:15:28.872 8.676 - 8.723: 98.4098% ( 1) 00:15:28.872 8.770 - 8.818: 98.4172% ( 1) 00:15:28.872 8.818 - 8.865: 98.4246% ( 1) 00:15:28.872 8.865 - 8.913: 98.4320% ( 1) 00:15:28.872 8.960 - 9.007: 98.4393% ( 1) 00:15:28.872 9.055 - 9.102: 98.4467% ( 1) 00:15:28.872 9.339 - 9.387: 98.4615% ( 2) 00:15:28.872 9.387 - 9.434: 98.4689% ( 1) 00:15:28.872 9.434 - 9.481: 98.4763% ( 1) 00:15:28.872 9.481 - 9.529: 98.4837% ( 1) 00:15:28.872 9.529 - 9.576: 98.4911% ( 1) 00:15:28.872 9.719 - 9.766: 98.5059% ( 2) 00:15:28.872 9.766 - 9.813: 98.5133% ( 1) 00:15:28.872 9.813 - 9.861: 98.5207% ( 1) 00:15:28.872 9.861 - 9.908: 98.5281% ( 1) 00:15:28.872 9.956 - 10.003: 98.5355% ( 1) 00:15:28.872 10.003 - 10.050: 98.5429% ( 1) 00:15:28.872 10.145 - 10.193: 98.5503% ( 1) 00:15:28.872 10.193 - 10.240: 98.5577% ( 1) 00:15:28.872 10.430 - 10.477: 98.5725% ( 2) 00:15:28.872 10.477 - 10.524: 98.5799% ( 1) 00:15:28.872 10.572 - 10.619: 98.5873% ( 1) 00:15:28.872 10.667 - 10.714: 98.5947% ( 1) 00:15:28.872 10.809 - 10.856: 98.6021% ( 1) 00:15:28.872 10.904 - 10.951: 98.6095% ( 1) 00:15:28.872 11.188 - 11.236: 98.6169% ( 1) 00:15:28.872 11.236 - 11.283: 98.6243% ( 1) 00:15:28.872 11.330 - 11.378: 98.6317% ( 1) 00:15:28.872 11.378 - 11.425: 98.6538% ( 3) 00:15:28.872 11.804 - 11.852: 98.6612% ( 1) 00:15:28.872 11.947 - 11.994: 98.6686% ( 1) 00:15:28.872 12.089 - 12.136: 98.6760% ( 1) 00:15:28.872 12.136 - 12.231: 98.6834% ( 1) 
00:15:28.872 12.421 - 12.516: 98.7056% ( 3) 00:15:28.872 12.516 - 12.610: 98.7204% ( 2) 00:15:28.872 12.705 - 12.800: 98.7426% ( 3) 00:15:28.872 12.800 - 12.895: 98.7500% ( 1) 00:15:28.872 12.895 - 12.990: 98.7574% ( 1) 00:15:28.872 12.990 - 13.084: 98.7722% ( 2) 00:15:28.872 13.274 - 13.369: 98.7796% ( 1) 00:15:28.872 13.369 - 13.464: 98.7870% ( 1) 00:15:28.872 13.464 - 13.559: 98.7944% ( 1) 00:15:28.872 13.559 - 13.653: 98.8018% ( 1) 00:15:28.872 13.653 - 13.748: 98.8166% ( 2) 00:15:28.872 13.843 - 13.938: 98.8240% ( 1) 00:15:28.872 14.033 - 14.127: 98.8388% ( 2) 00:15:28.872 14.127 - 14.222: 98.8462% ( 1) 00:15:28.872 14.222 - 14.317: 98.8536% ( 1) 00:15:28.872 14.412 - 14.507: 98.8609% ( 1) 00:15:28.872 14.507 - 14.601: 98.8757% ( 2) 00:15:28.872 14.601 - 14.696: 98.8831% ( 1) 00:15:28.872 14.696 - 14.791: 98.8979% ( 2) 00:15:28.872 14.981 - 15.076: 98.9053% ( 1) 00:15:28.872 15.644 - 15.739: 98.9127% ( 1) 00:15:28.872 16.972 - 17.067: 98.9201% ( 1) 00:15:28.872 17.161 - 17.256: 98.9423% ( 3) 00:15:28.872 17.256 - 17.351: 98.9645% ( 3) 00:15:28.872 17.351 - 17.446: 98.9867% ( 3) 00:15:28.872 17.446 - 17.541: 99.0015% ( 2) 00:15:28.872 17.541 - 17.636: 99.0163% ( 2) 00:15:28.872 17.636 - 17.730: 99.0754% ( 8) 00:15:28.872 17.730 - 17.825: 99.1198% ( 6) 00:15:28.872 17.825 - 17.920: 99.1568% ( 5) 00:15:28.872 17.920 - 18.015: 99.2899% ( 18) 00:15:28.872 18.015 - 18.110: 99.3343% ( 6) 00:15:28.872 18.110 - 18.204: 99.4157% ( 11) 00:15:28.872 18.204 - 18.299: 99.4822% ( 9) 00:15:28.872 18.299 - 18.394: 99.5266% ( 6) 00:15:28.872 18.394 - 18.489: 99.5488% ( 3) 00:15:28.872 18.489 - 18.584: 99.6228% ( 10) 00:15:28.872 18.584 - 18.679: 99.6598% ( 5) 00:15:28.872 18.679 - 18.773: 99.6820% ( 3) 00:15:28.872 18.773 - 18.868: 99.7115% ( 4) 00:15:28.872 18.868 - 18.963: 99.7411% ( 4) 00:15:28.872 18.963 - 19.058: 99.7485% ( 1) 00:15:28.872 19.058 - 19.153: 99.7633% ( 2) 00:15:28.872 19.153 - 19.247: 99.7781% ( 2) 00:15:28.872 19.247 - 19.342: 99.7855% ( 1) 00:15:28.872 
19.342 - 19.437: 99.7929% ( 1) 00:15:28.872 19.437 - 19.532: 99.8225% ( 4) 00:15:28.872 19.532 - 19.627: 99.8299% ( 1) 00:15:28.872 19.721 - 19.816: 99.8373% ( 1) 00:15:28.872 19.816 - 19.911: 99.8447% ( 1) 00:15:28.872 19.911 - 20.006: 99.8521% ( 1) 00:15:28.872 20.006 - 20.101: 99.8595% ( 1) 00:15:28.872 21.807 - 21.902: 99.8669% ( 1) 00:15:28.872 22.661 - 22.756: 99.8743% ( 1) 00:15:28.872 23.324 - 23.419: 99.8817% ( 1) 00:15:28.872 25.410 - 25.600: 99.8891% ( 1) 00:15:28.872 26.359 - 26.548: 99.8964% ( 1) 00:15:28.872 30.341 - 30.530: 99.9038% ( 1) 00:15:28.872 3009.801 - 3021.938: 99.9112% ( 1) 00:15:28.872 3980.705 - 4004.978: 99.9630% ( 7) 00:15:28.872 4004.978 - 4029.250: 100.0000% ( 5) 00:15:28.872 00:15:28.872 Complete histogram 00:15:28.872 ================== 00:15:28.872 Range in us Cumulative Count 00:15:28.872 2.050 - 2.062: 0.0888% ( 12) 00:15:28.872 2.062 - 2.074: 27.7959% ( 3746) 00:15:28.872 2.074 - 2.086: 41.1982% ( 1812) 00:15:28.872 2.086 - 2.098: 43.6317% ( 329) 00:15:28.872 2.098 - 2.110: 57.2115% ( 1836) 00:15:28.872 2.110 - 2.121: 60.7840% ( 483) 00:15:28.872 2.121 - 2.133: 63.4024% ( 354) 00:15:28.872 2.133 - 2.145: 73.0251% ( 1301) 00:15:28.872 2.145 - 2.157: 75.3107% ( 309) 00:15:28.872 2.157 - 2.169: 77.1746% ( 252) 00:15:28.872 2.169 - 2.181: 80.5547% ( 457) 00:15:28.872 2.181 - 2.193: 81.8195% ( 171) 00:15:28.872 2.193 - 2.204: 82.8846% ( 144) 00:15:28.872 2.204 - 2.216: 87.1228% ( 573) 00:15:28.872 2.216 - 2.228: 89.0976% ( 267) 00:15:28.872 2.228 - 2.240: 91.1908% ( 283) 00:15:28.872 2.240 - 2.252: 93.1953% ( 271) 00:15:28.872 2.252 - 2.264: 93.8388% ( 87) 00:15:28.872 2.264 - 2.276: 94.0680% ( 31) 00:15:28.872 2.276 - 2.287: 94.3491% ( 38) 00:15:28.872 2.287 - 2.299: 94.8521% ( 68) 00:15:28.872 2.299 - 2.311: 95.5104% ( 89) 00:15:28.872 2.311 - 2.323: 95.7470% ( 32) 00:15:28.872 2.323 - 2.335: 95.8654% ( 16) 00:15:28.872 2.335 - 2.347: 95.9985% ( 18) 00:15:28.872 2.347 - 2.359: 96.1982% ( 27) 00:15:28.873 2.359 - 2.370: 96.5089% ( 
42) 00:15:28.873 2.370 - 2.382: 96.8935% ( 52) 00:15:28.873 2.382 - 2.394: 97.2559% ( 49) 00:15:28.873 2.394 - 2.406: 97.5000% ( 33) 00:15:28.873 2.406 - 2.418: 97.6775% ( 24) 00:15:28.873 2.418 - 2.430: 97.7737% ( 13) 00:15:28.873 2.430 - 2.441: 97.8698% ( 13) 00:15:28.873 2.441 - 2.453: 98.0473% ( 24) 00:15:28.873 2.453 - 2.465: 98.1731% ( 17) 00:15:28.873 2.465 - 2.477: 98.2175% ( 6) 00:15:28.873 2.477 - 2.489: 98.2692% ( 7) 00:15:28.873 2.489 - 2.501: 98.3284% ( 8) 00:15:28.873 2.501 - 2.513: 98.3802% ( 7) 00:15:28.873 2.513 - 2.524: 98.4172% ( 5) 00:15:28.873 2.524 - 2.536: 98.4320% ( 2) 00:15:28.873 2.536 - 2.548: 98.4393% ( 1) 00:15:28.873 2.548 - 2.560: 98.4541% ( 2) 00:15:28.873 2.560 - 2.572: 98.4615% ( 1) 00:15:28.873 2.572 - 2.584: 98.4689% ( 1) 00:15:28.873 2.596 - 2.607: 98.4837% ( 2) 00:15:28.873 2.619 - 2.631: 98.4911% ( 1) 00:15:28.873 2.643 - 2.655: 98.4985% ( 1) 00:15:28.873 2.679 - 2.690: 98.5059% ( 1) 00:15:28.873 2.738 - 2.750: 98.5133% ( 1) 00:15:28.873 2.773 - 2.785: 98.5207% ( 1) 00:15:28.873 2.785 - 2.797: 98.5281% ( 1) 00:15:28.873 3.176 - 3.200: 98.5355% ( 1) 00:15:28.873 3.200 - 3.224: 98.5503% ( 2) 00:15:28.873 3.295 - 3.319: 98.5651% ( 2) 00:15:28.873 3.319 - 3.342: 98.5799% ( 2) 00:15:28.873 3.342 - 3.366: 98.6021% ( 3) 00:15:28.873 3.366 - 3.390: 98.6169% ( 2) 00:15:28.873 3.390 - 3.413: 98.6243% ( 1) 00:15:28.873 3.413 - 3.437: 98.6391% ( 2) 00:15:28.873 3.484 - 3.508: 98.6686% ( 4) 00:15:28.873 3.508 - 3.532: 98.6834% ( 2) 00:15:28.873 3.579 - 3.603: 98.6982% ( 2) 00:15:28.873 3.627 - 3.650: 98.7056% ( 1) 00:15:28.873 3.674 - 3.698: 98.7130% ( 1) 00:15:28.873 3.698 - 3.721: 98.7278% ( 2) 00:15:28.873 3.769 - 3.793: 98.7352% ( 1) 00:15:28.873 3.793 - 3.816: 98.7426% ( 1) 00:15:28.873 3.816 - 3.840: 98.7500% ( 1) 00:15:28.873 3.840 - 3.864: 98.7648% ( 2) 00:15:28.873 3.864 - 3.887: 98.7722% ( 1) 00:15:28.873 3.911 - 3.935: 98.7796% ( 1) 00:15:28.873 3.982 - 4.006: 98.7870% ( 1) 00:15:28.873 4.575 - 4.599: 98.7944% ( 1) 00:15:28.873 
4.883 - 4.907: 98.8018% ( 1) 00:15:28.873 5.381 - 5.404: 98.8092% ( 1) 00:15:28.873 5.476 - 5.499: 98.8166% ( 1) 00:15:28.873 5.499 - 5.523: 98.8240% ( 1) 00:15:28.873 5.570 - 5.594: 98.8314% ( 1) 00:15:28.873 5.760 - 5.784: 98.8388% ( 1) 00:15:28.873 5.831 - 5.855: 98.8462% ( 1) 00:15:28.873 5.855 - 5.879: 98.8536% ( 1) 00:15:28.873 5.879 - 5.902: 98.8609% ( 1) 00:15:28.873 5.902 - 5.926: 98.8683% ( 1) 00:15:28.873 5.926 - 5.950: 98.8757% ( 1) 00:15:28.873 5.973 - 5.997: 98.8831% ( 1) 00:15:28.873 6.068 - 6.116: 98.8905% ( 1) 00:15:28.873 6.116 - 6.163: 98.8979% ( 1) 00:15:28.873 6.305 - 6.353: 98.9053% ( 1) 00:15:28.873 6.447 - 6.495: 98.9127% ( 1) 00:15:28.873 6.495 - 6.542: 98.9201% ( 1) 00:15:28.873 6.542 - 6.590: 98.9275% ( 1) 00:15:28.873 6.637 - 6.684: 98.9349% ( 1) 00:15:28.873 6.732 - 6.779: 98.9423% ( 1) 00:15:28.873 7.301 - 7.348: 98.9497% ( 1) 00:15:28.873 7.538 - 7.585: 98.9571% ( 1) 00:15:28.873 8.581 - 8.628: 98.9645% ( 1) 00:15:28.873 8.723 - 8.770: 98.9719% ( 1) 00:15:28.873 10.761 - 10.809: 98.9793% ( 1) 00:15:28.873 15.455 - 15.550: 98.9867% ( 1) 00:15:28.873 15.550 - 15.644: 98.9941% ( 1) 00:15:28.873 15.644 - 15.739: 99.0015% ( 1) 00:15:28.873 15.834 - 15.929: 99.0089% ( 1) 00:15:28.873 15.929 - 16.024: 99.0459% ( 5) 00:15:28.873 16.024 - 16.119: 99.0533% ( 1) 00:15:28.873 16.119 - 16.213: 99.0754% ( 3) 00:15:28.873 16.213 - 16.308: 99.1198% ( 6) 00:15:28.873 16.308 - 16.403: 99.1494% ( 4) 00:15:28.873 16.403 - 16.498: 99.1938% ( 6) 00:15:28.873 16.498 - 16.593: 99.2751% ( 11) 00:15:28.873 16.593 - 16.687: 99.3195% ( 6) 00:15:28.873 16.687 - 16.782: 99.3343% ( 2) 00:15:28.873 16.782 - 16.877: 99.3639% ( 4) 00:15:28.873 16.877 - 16.972: 99.3861% ( 3) 00:15:28.873 16.972 - 17.067: 99.3935% ( 1) 00:15:28.873 17.067 - 17.161: 99.4083% ( 2) 00:15:28.873 17.161 - 17.256: 99.4231% ( 2) 00:15:29.130 [2024-07-14 18:47:17.085493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:29.130 17.351 - 
17.446: 99.4305% ( 1) 00:15:29.130 17.446 - 17.541: 99.4379% ( 1) 00:15:29.130 17.541 - 17.636: 99.4453% ( 1) 00:15:29.130 18.015 - 18.110: 99.4527% ( 1) 00:15:29.130 18.394 - 18.489: 99.4675% ( 2) 00:15:29.130 18.584 - 18.679: 99.4822% ( 2) 00:15:29.130 20.954 - 21.049: 99.4896% ( 1) 00:15:29.130 3228.255 - 3252.527: 99.4970% ( 1) 00:15:29.130 3980.705 - 4004.978: 99.9038% ( 55) 00:15:29.130 4004.978 - 4029.250: 100.0000% ( 13) 00:15:29.130 00:15:29.130 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:29.130 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:29.130 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:29.130 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:29.130 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:29.398 [ 00:15:29.398 { 00:15:29.398 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:29.398 "subtype": "Discovery", 00:15:29.398 "listen_addresses": [], 00:15:29.398 "allow_any_host": true, 00:15:29.398 "hosts": [] 00:15:29.398 }, 00:15:29.398 { 00:15:29.398 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:29.398 "subtype": "NVMe", 00:15:29.398 "listen_addresses": [ 00:15:29.398 { 00:15:29.398 "trtype": "VFIOUSER", 00:15:29.398 "adrfam": "IPv4", 00:15:29.398 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:29.398 "trsvcid": "0" 00:15:29.398 } 00:15:29.398 ], 00:15:29.398 "allow_any_host": true, 00:15:29.398 "hosts": [], 00:15:29.398 "serial_number": "SPDK1", 00:15:29.398 "model_number": "SPDK bdev Controller", 00:15:29.398 "max_namespaces": 32, 00:15:29.398 "min_cntlid": 1, 00:15:29.398 "max_cntlid": 65519, 00:15:29.398 "namespaces": 
[ 00:15:29.398 { 00:15:29.398 "nsid": 1, 00:15:29.398 "bdev_name": "Malloc1", 00:15:29.398 "name": "Malloc1", 00:15:29.398 "nguid": "A790C0ED38434A7FB8EAF293897C84E6", 00:15:29.398 "uuid": "a790c0ed-3843-4a7f-b8ea-f293897c84e6" 00:15:29.398 } 00:15:29.398 ] 00:15:29.398 }, 00:15:29.398 { 00:15:29.398 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:29.398 "subtype": "NVMe", 00:15:29.398 "listen_addresses": [ 00:15:29.398 { 00:15:29.398 "trtype": "VFIOUSER", 00:15:29.398 "adrfam": "IPv4", 00:15:29.398 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:29.398 "trsvcid": "0" 00:15:29.398 } 00:15:29.398 ], 00:15:29.398 "allow_any_host": true, 00:15:29.398 "hosts": [], 00:15:29.398 "serial_number": "SPDK2", 00:15:29.398 "model_number": "SPDK bdev Controller", 00:15:29.398 "max_namespaces": 32, 00:15:29.398 "min_cntlid": 1, 00:15:29.398 "max_cntlid": 65519, 00:15:29.398 "namespaces": [ 00:15:29.398 { 00:15:29.398 "nsid": 1, 00:15:29.398 "bdev_name": "Malloc2", 00:15:29.398 "name": "Malloc2", 00:15:29.398 "nguid": "26C753863FA746AF8D43370E4B21ABCE", 00:15:29.398 "uuid": "26c75386-3fa7-46af-8d43-370e4b21abce" 00:15:29.398 } 00:15:29.398 ] 00:15:29.398 } 00:15:29.398 ] 00:15:29.398 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:29.398 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3560643 00:15:29.398 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:29.399 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:29.399 18:47:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:29.399 18:47:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:29.399 18:47:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:29.399 18:47:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:29.399 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:29.399 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:29.399 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.399 [2024-07-14 18:47:17.539389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:29.662 Malloc3 00:15:29.662 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:29.920 [2024-07-14 18:47:17.892949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:29.920 18:47:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:29.920 Asynchronous Event Request test 00:15:29.920 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:29.920 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:29.920 Registering asynchronous event callbacks... 00:15:29.920 Starting namespace attribute notice tests for all controllers... 00:15:29.920 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:29.920 aer_cb - Changed Namespace 00:15:29.920 Cleaning up... 
00:15:30.179 [ 00:15:30.179 { 00:15:30.179 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:30.179 "subtype": "Discovery", 00:15:30.179 "listen_addresses": [], 00:15:30.179 "allow_any_host": true, 00:15:30.179 "hosts": [] 00:15:30.179 }, 00:15:30.179 { 00:15:30.179 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:30.179 "subtype": "NVMe", 00:15:30.179 "listen_addresses": [ 00:15:30.179 { 00:15:30.179 "trtype": "VFIOUSER", 00:15:30.179 "adrfam": "IPv4", 00:15:30.179 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:30.179 "trsvcid": "0" 00:15:30.179 } 00:15:30.179 ], 00:15:30.179 "allow_any_host": true, 00:15:30.179 "hosts": [], 00:15:30.179 "serial_number": "SPDK1", 00:15:30.179 "model_number": "SPDK bdev Controller", 00:15:30.179 "max_namespaces": 32, 00:15:30.179 "min_cntlid": 1, 00:15:30.179 "max_cntlid": 65519, 00:15:30.179 "namespaces": [ 00:15:30.179 { 00:15:30.179 "nsid": 1, 00:15:30.179 "bdev_name": "Malloc1", 00:15:30.179 "name": "Malloc1", 00:15:30.179 "nguid": "A790C0ED38434A7FB8EAF293897C84E6", 00:15:30.179 "uuid": "a790c0ed-3843-4a7f-b8ea-f293897c84e6" 00:15:30.179 }, 00:15:30.179 { 00:15:30.179 "nsid": 2, 00:15:30.179 "bdev_name": "Malloc3", 00:15:30.179 "name": "Malloc3", 00:15:30.179 "nguid": "80AAD45ADE87438CB5529BC4D14A112A", 00:15:30.179 "uuid": "80aad45a-de87-438c-b552-9bc4d14a112a" 00:15:30.179 } 00:15:30.179 ] 00:15:30.179 }, 00:15:30.179 { 00:15:30.179 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:30.179 "subtype": "NVMe", 00:15:30.179 "listen_addresses": [ 00:15:30.179 { 00:15:30.179 "trtype": "VFIOUSER", 00:15:30.179 "adrfam": "IPv4", 00:15:30.179 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:30.179 "trsvcid": "0" 00:15:30.179 } 00:15:30.179 ], 00:15:30.179 "allow_any_host": true, 00:15:30.179 "hosts": [], 00:15:30.179 "serial_number": "SPDK2", 00:15:30.179 "model_number": "SPDK bdev Controller", 00:15:30.179 "max_namespaces": 32, 00:15:30.179 "min_cntlid": 1, 00:15:30.179 "max_cntlid": 65519, 00:15:30.179 "namespaces": [ 
00:15:30.179 { 00:15:30.179 "nsid": 1, 00:15:30.179 "bdev_name": "Malloc2", 00:15:30.179 "name": "Malloc2", 00:15:30.179 "nguid": "26C753863FA746AF8D43370E4B21ABCE", 00:15:30.179 "uuid": "26c75386-3fa7-46af-8d43-370e4b21abce" 00:15:30.179 } 00:15:30.179 ] 00:15:30.179 } 00:15:30.179 ] 00:15:30.179 18:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3560643 00:15:30.179 18:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.179 18:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:30.179 18:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:30.179 18:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:30.179 [2024-07-14 18:47:18.190082] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:15:30.179 [2024-07-14 18:47:18.190124] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560761 ] 00:15:30.179 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.179 [2024-07-14 18:47:18.226108] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:30.179 [2024-07-14 18:47:18.228486] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:30.179 [2024-07-14 18:47:18.228520] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcdeb2d1000 00:15:30.179 [2024-07-14 18:47:18.229478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.179 [2024-07-14 18:47:18.230483] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.179 [2024-07-14 18:47:18.231490] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.179 [2024-07-14 18:47:18.232495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:30.179 [2024-07-14 18:47:18.233500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:30.179 [2024-07-14 18:47:18.234502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.179 [2024-07-14 18:47:18.235507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:30.179 [2024-07-14 18:47:18.236514] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:30.179 [2024-07-14 18:47:18.237526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:30.179 [2024-07-14 18:47:18.237547] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcdea085000 00:15:30.179 [2024-07-14 18:47:18.238702] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:30.179 [2024-07-14 18:47:18.252544] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:30.179 [2024-07-14 18:47:18.252577] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:30.179 [2024-07-14 18:47:18.257690] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:30.179 [2024-07-14 18:47:18.257738] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:30.179 [2024-07-14 18:47:18.257824] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:30.179 [2024-07-14 18:47:18.257847] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:30.179 [2024-07-14 18:47:18.257857] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:30.179 [2024-07-14 18:47:18.258694] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:30.180 [2024-07-14 18:47:18.258714] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:30.180 [2024-07-14 18:47:18.258726] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:30.180 [2024-07-14 18:47:18.259703] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:30.180 [2024-07-14 18:47:18.259722] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:30.180 [2024-07-14 18:47:18.259735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:30.180 [2024-07-14 18:47:18.260710] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:30.180 [2024-07-14 18:47:18.260730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:30.180 [2024-07-14 18:47:18.261717] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:30.180 [2024-07-14 18:47:18.261737] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:30.180 [2024-07-14 18:47:18.261745] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:30.180 [2024-07-14 18:47:18.261757] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:30.180 [2024-07-14 18:47:18.261866] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:30.180 [2024-07-14 18:47:18.261882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:30.180 [2024-07-14 18:47:18.261891] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:30.180 [2024-07-14 18:47:18.262723] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:30.180 [2024-07-14 18:47:18.263728] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:30.180 [2024-07-14 18:47:18.264732] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:30.180 [2024-07-14 18:47:18.265724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:30.180 [2024-07-14 18:47:18.265809] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:30.180 [2024-07-14 18:47:18.266743] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:30.180 [2024-07-14 18:47:18.266762] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:30.180 [2024-07-14 18:47:18.266771] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.266795] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:30.180 [2024-07-14 18:47:18.266807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.266827] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:30.180 [2024-07-14 18:47:18.266836] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:30.180 [2024-07-14 18:47:18.266868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:30.180 [2024-07-14 18:47:18.274892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:30.180 [2024-07-14 18:47:18.274915] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:30.180 [2024-07-14 18:47:18.274928] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:30.180 [2024-07-14 18:47:18.274937] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:30.180 [2024-07-14 18:47:18.274944] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:30.180 [2024-07-14 18:47:18.274952] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:30.180 [2024-07-14 
18:47:18.274960] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:30.180 [2024-07-14 18:47:18.274968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.274981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.274996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:30.180 [2024-07-14 18:47:18.282888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:30.180 [2024-07-14 18:47:18.282917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.180 [2024-07-14 18:47:18.282932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.180 [2024-07-14 18:47:18.282944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.180 [2024-07-14 18:47:18.282959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.180 [2024-07-14 18:47:18.282969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.282984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:30.180 [2024-07-14 
18:47:18.282999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:30.180 [2024-07-14 18:47:18.290885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:30.180 [2024-07-14 18:47:18.290903] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:30.180 [2024-07-14 18:47:18.290912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.290923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.290933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.290946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:30.180 [2024-07-14 18:47:18.298887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:30.180 [2024-07-14 18:47:18.298958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.298973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.298986] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:30.180 [2024-07-14 
18:47:18.298994] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:30.180 [2024-07-14 18:47:18.299004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:30.180 [2024-07-14 18:47:18.306889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:30.180 [2024-07-14 18:47:18.306911] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:30.180 [2024-07-14 18:47:18.306932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.306946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.306958] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:30.180 [2024-07-14 18:47:18.306966] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:30.180 [2024-07-14 18:47:18.306976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:30.180 [2024-07-14 18:47:18.314886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:30.180 [2024-07-14 18:47:18.314913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.314929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.314946] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:30.180 [2024-07-14 18:47:18.314955] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:30.180 [2024-07-14 18:47:18.314964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:30.180 [2024-07-14 18:47:18.322886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:30.180 [2024-07-14 18:47:18.322907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.322919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.322933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.322943] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.322951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.322959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.322967] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:15:30.180 [2024-07-14 18:47:18.322974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:30.180 [2024-07-14 18:47:18.322982] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:30.180 [2024-07-14 18:47:18.323007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:30.180 [2024-07-14 18:47:18.330886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:30.180 [2024-07-14 18:47:18.330912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:30.180 [2024-07-14 18:47:18.338889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:30.181 [2024-07-14 18:47:18.338915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:30.181 [2024-07-14 18:47:18.346889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:30.181 [2024-07-14 18:47:18.346913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:30.181 [2024-07-14 18:47:18.354898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:30.181 [2024-07-14 18:47:18.354929] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:30.181 [2024-07-14 18:47:18.354940] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:30.181 [2024-07-14 
18:47:18.354946] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:30.181 [2024-07-14 18:47:18.354952] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:30.181 [2024-07-14 18:47:18.354962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:30.181 [2024-07-14 18:47:18.354979] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:30.181 [2024-07-14 18:47:18.354988] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:30.181 [2024-07-14 18:47:18.354997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:30.181 [2024-07-14 18:47:18.355008] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:30.181 [2024-07-14 18:47:18.355015] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:30.181 [2024-07-14 18:47:18.355024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:30.181 [2024-07-14 18:47:18.355035] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:30.181 [2024-07-14 18:47:18.355043] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:30.181 [2024-07-14 18:47:18.355052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:30.181 [2024-07-14 18:47:18.362889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:30.181 [2024-07-14 18:47:18.362917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:30.181 [2024-07-14 18:47:18.362934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:30.181 [2024-07-14 18:47:18.362946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:30.181 ===================================================== 00:15:30.181 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:30.181 ===================================================== 00:15:30.181 Controller Capabilities/Features 00:15:30.181 ================================ 00:15:30.181 Vendor ID: 4e58 00:15:30.181 Subsystem Vendor ID: 4e58 00:15:30.181 Serial Number: SPDK2 00:15:30.181 Model Number: SPDK bdev Controller 00:15:30.181 Firmware Version: 24.09 00:15:30.181 Recommended Arb Burst: 6 00:15:30.181 IEEE OUI Identifier: 8d 6b 50 00:15:30.181 Multi-path I/O 00:15:30.181 May have multiple subsystem ports: Yes 00:15:30.181 May have multiple controllers: Yes 00:15:30.181 Associated with SR-IOV VF: No 00:15:30.181 Max Data Transfer Size: 131072 00:15:30.181 Max Number of Namespaces: 32 00:15:30.181 Max Number of I/O Queues: 127 00:15:30.181 NVMe Specification Version (VS): 1.3 00:15:30.181 NVMe Specification Version (Identify): 1.3 00:15:30.181 Maximum Queue Entries: 256 00:15:30.181 Contiguous Queues Required: Yes 00:15:30.181 Arbitration Mechanisms Supported 00:15:30.181 Weighted Round Robin: Not Supported 00:15:30.181 Vendor Specific: Not Supported 00:15:30.181 Reset Timeout: 15000 ms 00:15:30.181 Doorbell Stride: 4 bytes 00:15:30.181 NVM Subsystem Reset: Not Supported 00:15:30.181 Command Sets Supported 00:15:30.181 NVM Command Set: Supported 00:15:30.181 Boot Partition: Not Supported 
00:15:30.181 Memory Page Size Minimum: 4096 bytes 00:15:30.181 Memory Page Size Maximum: 4096 bytes 00:15:30.181 Persistent Memory Region: Not Supported 00:15:30.181 Optional Asynchronous Events Supported 00:15:30.181 Namespace Attribute Notices: Supported 00:15:30.181 Firmware Activation Notices: Not Supported 00:15:30.181 ANA Change Notices: Not Supported 00:15:30.181 PLE Aggregate Log Change Notices: Not Supported 00:15:30.181 LBA Status Info Alert Notices: Not Supported 00:15:30.181 EGE Aggregate Log Change Notices: Not Supported 00:15:30.181 Normal NVM Subsystem Shutdown event: Not Supported 00:15:30.181 Zone Descriptor Change Notices: Not Supported 00:15:30.181 Discovery Log Change Notices: Not Supported 00:15:30.181 Controller Attributes 00:15:30.181 128-bit Host Identifier: Supported 00:15:30.181 Non-Operational Permissive Mode: Not Supported 00:15:30.181 NVM Sets: Not Supported 00:15:30.181 Read Recovery Levels: Not Supported 00:15:30.181 Endurance Groups: Not Supported 00:15:30.181 Predictable Latency Mode: Not Supported 00:15:30.181 Traffic Based Keep ALive: Not Supported 00:15:30.181 Namespace Granularity: Not Supported 00:15:30.181 SQ Associations: Not Supported 00:15:30.181 UUID List: Not Supported 00:15:30.181 Multi-Domain Subsystem: Not Supported 00:15:30.181 Fixed Capacity Management: Not Supported 00:15:30.181 Variable Capacity Management: Not Supported 00:15:30.181 Delete Endurance Group: Not Supported 00:15:30.181 Delete NVM Set: Not Supported 00:15:30.181 Extended LBA Formats Supported: Not Supported 00:15:30.181 Flexible Data Placement Supported: Not Supported 00:15:30.181 00:15:30.181 Controller Memory Buffer Support 00:15:30.181 ================================ 00:15:30.181 Supported: No 00:15:30.181 00:15:30.181 Persistent Memory Region Support 00:15:30.181 ================================ 00:15:30.181 Supported: No 00:15:30.181 00:15:30.181 Admin Command Set Attributes 00:15:30.181 ============================ 00:15:30.181 Security 
Send/Receive: Not Supported 00:15:30.181 Format NVM: Not Supported 00:15:30.181 Firmware Activate/Download: Not Supported 00:15:30.181 Namespace Management: Not Supported 00:15:30.181 Device Self-Test: Not Supported 00:15:30.181 Directives: Not Supported 00:15:30.181 NVMe-MI: Not Supported 00:15:30.181 Virtualization Management: Not Supported 00:15:30.181 Doorbell Buffer Config: Not Supported 00:15:30.181 Get LBA Status Capability: Not Supported 00:15:30.181 Command & Feature Lockdown Capability: Not Supported 00:15:30.181 Abort Command Limit: 4 00:15:30.181 Async Event Request Limit: 4 00:15:30.181 Number of Firmware Slots: N/A 00:15:30.181 Firmware Slot 1 Read-Only: N/A 00:15:30.181 Firmware Activation Without Reset: N/A 00:15:30.181 Multiple Update Detection Support: N/A 00:15:30.181 Firmware Update Granularity: No Information Provided 00:15:30.181 Per-Namespace SMART Log: No 00:15:30.181 Asymmetric Namespace Access Log Page: Not Supported 00:15:30.181 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:30.181 Command Effects Log Page: Supported 00:15:30.181 Get Log Page Extended Data: Supported 00:15:30.181 Telemetry Log Pages: Not Supported 00:15:30.181 Persistent Event Log Pages: Not Supported 00:15:30.181 Supported Log Pages Log Page: May Support 00:15:30.181 Commands Supported & Effects Log Page: Not Supported 00:15:30.181 Feature Identifiers & Effects Log Page:May Support 00:15:30.181 NVMe-MI Commands & Effects Log Page: May Support 00:15:30.181 Data Area 4 for Telemetry Log: Not Supported 00:15:30.181 Error Log Page Entries Supported: 128 00:15:30.181 Keep Alive: Supported 00:15:30.181 Keep Alive Granularity: 10000 ms 00:15:30.181 00:15:30.181 NVM Command Set Attributes 00:15:30.181 ========================== 00:15:30.181 Submission Queue Entry Size 00:15:30.181 Max: 64 00:15:30.181 Min: 64 00:15:30.181 Completion Queue Entry Size 00:15:30.181 Max: 16 00:15:30.181 Min: 16 00:15:30.181 Number of Namespaces: 32 00:15:30.181 Compare Command: Supported 
00:15:30.181 Write Uncorrectable Command: Not Supported 00:15:30.181 Dataset Management Command: Supported 00:15:30.181 Write Zeroes Command: Supported 00:15:30.181 Set Features Save Field: Not Supported 00:15:30.181 Reservations: Not Supported 00:15:30.181 Timestamp: Not Supported 00:15:30.181 Copy: Supported 00:15:30.181 Volatile Write Cache: Present 00:15:30.181 Atomic Write Unit (Normal): 1 00:15:30.181 Atomic Write Unit (PFail): 1 00:15:30.181 Atomic Compare & Write Unit: 1 00:15:30.181 Fused Compare & Write: Supported 00:15:30.181 Scatter-Gather List 00:15:30.181 SGL Command Set: Supported (Dword aligned) 00:15:30.181 SGL Keyed: Not Supported 00:15:30.181 SGL Bit Bucket Descriptor: Not Supported 00:15:30.181 SGL Metadata Pointer: Not Supported 00:15:30.181 Oversized SGL: Not Supported 00:15:30.181 SGL Metadata Address: Not Supported 00:15:30.181 SGL Offset: Not Supported 00:15:30.181 Transport SGL Data Block: Not Supported 00:15:30.181 Replay Protected Memory Block: Not Supported 00:15:30.181 00:15:30.181 Firmware Slot Information 00:15:30.181 ========================= 00:15:30.181 Active slot: 1 00:15:30.181 Slot 1 Firmware Revision: 24.09 00:15:30.181 00:15:30.181 00:15:30.181 Commands Supported and Effects 00:15:30.181 ============================== 00:15:30.181 Admin Commands 00:15:30.181 -------------- 00:15:30.181 Get Log Page (02h): Supported 00:15:30.181 Identify (06h): Supported 00:15:30.181 Abort (08h): Supported 00:15:30.181 Set Features (09h): Supported 00:15:30.181 Get Features (0Ah): Supported 00:15:30.181 Asynchronous Event Request (0Ch): Supported 00:15:30.181 Keep Alive (18h): Supported 00:15:30.181 I/O Commands 00:15:30.181 ------------ 00:15:30.181 Flush (00h): Supported LBA-Change 00:15:30.181 Write (01h): Supported LBA-Change 00:15:30.181 Read (02h): Supported 00:15:30.182 Compare (05h): Supported 00:15:30.182 Write Zeroes (08h): Supported LBA-Change 00:15:30.182 Dataset Management (09h): Supported LBA-Change 00:15:30.182 Copy (19h): 
Supported LBA-Change 00:15:30.182 00:15:30.182 Error Log 00:15:30.182 ========= 00:15:30.182 00:15:30.182 Arbitration 00:15:30.182 =========== 00:15:30.182 Arbitration Burst: 1 00:15:30.182 00:15:30.182 Power Management 00:15:30.182 ================ 00:15:30.182 Number of Power States: 1 00:15:30.182 Current Power State: Power State #0 00:15:30.182 Power State #0: 00:15:30.182 Max Power: 0.00 W 00:15:30.182 Non-Operational State: Operational 00:15:30.182 Entry Latency: Not Reported 00:15:30.182 Exit Latency: Not Reported 00:15:30.182 Relative Read Throughput: 0 00:15:30.182 Relative Read Latency: 0 00:15:30.182 Relative Write Throughput: 0 00:15:30.182 Relative Write Latency: 0 00:15:30.182 Idle Power: Not Reported 00:15:30.182 Active Power: Not Reported 00:15:30.182 Non-Operational Permissive Mode: Not Supported 00:15:30.182 00:15:30.182 Health Information 00:15:30.182 ================== 00:15:30.182 Critical Warnings: 00:15:30.182 Available Spare Space: OK 00:15:30.182 Temperature: OK 00:15:30.182 Device Reliability: OK 00:15:30.182 Read Only: No 00:15:30.182 Volatile Memory Backup: OK 00:15:30.182 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:30.182 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:30.182 Available Spare: 0% 00:15:30.182 Available Sp[2024-07-14 18:47:18.363076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:30.182 [2024-07-14 18:47:18.370887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:30.182 [2024-07-14 18:47:18.370936] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:30.182 [2024-07-14 18:47:18.370954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.182 [2024-07-14 18:47:18.370965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.182 [2024-07-14 18:47:18.370975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.182 [2024-07-14 18:47:18.370984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.182 [2024-07-14 18:47:18.371065] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:30.182 [2024-07-14 18:47:18.371085] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:30.182 [2024-07-14 18:47:18.372067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.182 [2024-07-14 18:47:18.372139] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:30.182 [2024-07-14 18:47:18.372155] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:30.182 [2024-07-14 18:47:18.373083] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:30.182 [2024-07-14 18:47:18.373107] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:30.182 [2024-07-14 18:47:18.373165] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:30.182 [2024-07-14 18:47:18.374356] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:30.440 are Threshold: 0% 00:15:30.440 
Life Percentage Used: 0% 00:15:30.440 Data Units Read: 0 00:15:30.440 Data Units Written: 0 00:15:30.440 Host Read Commands: 0 00:15:30.440 Host Write Commands: 0 00:15:30.440 Controller Busy Time: 0 minutes 00:15:30.440 Power Cycles: 0 00:15:30.440 Power On Hours: 0 hours 00:15:30.440 Unsafe Shutdowns: 0 00:15:30.440 Unrecoverable Media Errors: 0 00:15:30.440 Lifetime Error Log Entries: 0 00:15:30.440 Warning Temperature Time: 0 minutes 00:15:30.440 Critical Temperature Time: 0 minutes 00:15:30.440 00:15:30.440 Number of Queues 00:15:30.440 ================ 00:15:30.440 Number of I/O Submission Queues: 127 00:15:30.440 Number of I/O Completion Queues: 127 00:15:30.440 00:15:30.440 Active Namespaces 00:15:30.440 ================= 00:15:30.440 Namespace ID:1 00:15:30.440 Error Recovery Timeout: Unlimited 00:15:30.440 Command Set Identifier: NVM (00h) 00:15:30.440 Deallocate: Supported 00:15:30.440 Deallocated/Unwritten Error: Not Supported 00:15:30.440 Deallocated Read Value: Unknown 00:15:30.440 Deallocate in Write Zeroes: Not Supported 00:15:30.440 Deallocated Guard Field: 0xFFFF 00:15:30.440 Flush: Supported 00:15:30.440 Reservation: Supported 00:15:30.440 Namespace Sharing Capabilities: Multiple Controllers 00:15:30.440 Size (in LBAs): 131072 (0GiB) 00:15:30.440 Capacity (in LBAs): 131072 (0GiB) 00:15:30.440 Utilization (in LBAs): 131072 (0GiB) 00:15:30.440 NGUID: 26C753863FA746AF8D43370E4B21ABCE 00:15:30.440 UUID: 26c75386-3fa7-46af-8d43-370e4b21abce 00:15:30.440 Thin Provisioning: Not Supported 00:15:30.440 Per-NS Atomic Units: Yes 00:15:30.440 Atomic Boundary Size (Normal): 0 00:15:30.440 Atomic Boundary Size (PFail): 0 00:15:30.440 Atomic Boundary Offset: 0 00:15:30.440 Maximum Single Source Range Length: 65535 00:15:30.440 Maximum Copy Length: 65535 00:15:30.440 Maximum Source Range Count: 1 00:15:30.440 NGUID/EUI64 Never Reused: No 00:15:30.440 Namespace Write Protected: No 00:15:30.440 Number of LBA Formats: 1 00:15:30.440 Current LBA Format: LBA Format 
#00 00:15:30.440 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:30.440 00:15:30.440 18:47:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:30.440 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.440 [2024-07-14 18:47:18.606500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.698 Initializing NVMe Controllers 00:15:35.698 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:35.698 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:35.698 Initialization complete. Launching workers. 00:15:35.698 ======================================================== 00:15:35.698 Latency(us) 00:15:35.698 Device Information : IOPS MiB/s Average min max 00:15:35.698 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35121.17 137.19 3644.00 1142.21 7522.28 00:15:35.698 ======================================================== 00:15:35.698 Total : 35121.17 137.19 3644.00 1142.21 7522.28 00:15:35.698 00:15:35.698 [2024-07-14 18:47:23.711251] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.698 18:47:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:35.698 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.955 [2024-07-14 18:47:23.954943] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.217 
Initializing NVMe Controllers 00:15:41.217 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:41.217 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:41.217 Initialization complete. Launching workers. 00:15:41.217 ======================================================== 00:15:41.217 Latency(us) 00:15:41.217 Device Information : IOPS MiB/s Average min max 00:15:41.217 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32526.80 127.06 3936.91 1190.73 9920.76 00:15:41.217 ======================================================== 00:15:41.217 Total : 32526.80 127.06 3936.91 1190.73 9920.76 00:15:41.217 00:15:41.217 [2024-07-14 18:47:28.977032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.217 18:47:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:41.217 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.217 [2024-07-14 18:47:29.185206] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:46.484 [2024-07-14 18:47:34.327031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:46.484 Initializing NVMe Controllers 00:15:46.484 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:46.484 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:46.484 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:46.484 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:15:46.484 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:46.484 Initialization complete. Launching workers. 00:15:46.484 Starting thread on core 2 00:15:46.484 Starting thread on core 3 00:15:46.484 Starting thread on core 1 00:15:46.484 18:47:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:46.484 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.484 [2024-07-14 18:47:34.639427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.670 [2024-07-14 18:47:38.365140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.670 Initializing NVMe Controllers 00:15:50.670 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.670 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.670 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:50.670 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:50.670 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:50.670 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:50.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:50.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:50.670 Initialization complete. Launching workers. 
00:15:50.670 Starting thread on core 1 with urgent priority queue 00:15:50.670 Starting thread on core 2 with urgent priority queue 00:15:50.670 Starting thread on core 3 with urgent priority queue 00:15:50.670 Starting thread on core 0 with urgent priority queue 00:15:50.670 SPDK bdev Controller (SPDK2 ) core 0: 5723.67 IO/s 17.47 secs/100000 ios 00:15:50.670 SPDK bdev Controller (SPDK2 ) core 1: 5588.00 IO/s 17.90 secs/100000 ios 00:15:50.670 SPDK bdev Controller (SPDK2 ) core 2: 6026.00 IO/s 16.59 secs/100000 ios 00:15:50.670 SPDK bdev Controller (SPDK2 ) core 3: 5606.33 IO/s 17.84 secs/100000 ios 00:15:50.670 ======================================================== 00:15:50.670 00:15:50.670 18:47:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:50.670 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.670 [2024-07-14 18:47:38.665908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.670 Initializing NVMe Controllers 00:15:50.670 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.670 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.670 Namespace ID: 1 size: 0GB 00:15:50.670 Initialization complete. 00:15:50.670 INFO: using host memory buffer for IO 00:15:50.670 Hello world! 
00:15:50.670 [2024-07-14 18:47:38.677991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.670 18:47:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:50.670 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.927 [2024-07-14 18:47:38.969219] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:51.857 Initializing NVMe Controllers 00:15:51.857 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.857 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:51.857 Initialization complete. Launching workers. 00:15:51.857 submit (in ns) avg, min, max = 9423.1, 3498.9, 4015048.9 00:15:51.857 complete (in ns) avg, min, max = 24674.2, 2064.4, 4090293.3 00:15:51.857 00:15:51.857 Submit histogram 00:15:51.857 ================ 00:15:51.857 Range in us Cumulative Count 00:15:51.857 3.484 - 3.508: 0.0304% ( 4) 00:15:51.857 3.508 - 3.532: 0.2962% ( 35) 00:15:51.857 3.532 - 3.556: 1.6631% ( 180) 00:15:51.857 3.556 - 3.579: 3.9869% ( 306) 00:15:51.857 3.579 - 3.603: 9.1738% ( 683) 00:15:51.857 3.603 - 3.627: 17.5881% ( 1108) 00:15:51.857 3.627 - 3.650: 26.4125% ( 1162) 00:15:51.857 3.650 - 3.674: 34.9787% ( 1128) 00:15:51.857 3.674 - 3.698: 42.0185% ( 927) 00:15:51.857 3.698 - 3.721: 48.5723% ( 863) 00:15:51.857 3.721 - 3.745: 53.7287% ( 679) 00:15:51.857 3.745 - 3.769: 58.1637% ( 584) 00:15:51.857 3.769 - 3.793: 61.9684% ( 501) 00:15:51.857 3.793 - 3.816: 65.5225% ( 468) 00:15:51.857 3.816 - 3.840: 68.9095% ( 446) 00:15:51.857 3.840 - 3.864: 73.3673% ( 587) 00:15:51.857 3.864 - 3.887: 77.4605% ( 539) 00:15:51.857 3.887 - 3.911: 81.1133% ( 481) 00:15:51.857 3.911 - 3.935: 84.2193% ( 409) 00:15:51.857 3.935 - 3.959: 86.4444% ( 293) 
00:15:51.857 3.959 - 3.982: 88.3278% ( 248) 00:15:51.857 3.982 - 4.006: 89.8466% ( 200) 00:15:51.857 4.006 - 4.030: 91.2439% ( 184) 00:15:51.857 4.030 - 4.053: 92.3679% ( 148) 00:15:51.857 4.053 - 4.077: 93.4235% ( 139) 00:15:51.857 4.077 - 4.101: 94.3272% ( 119) 00:15:51.857 4.101 - 4.124: 94.8891% ( 74) 00:15:51.857 4.124 - 4.148: 95.4739% ( 77) 00:15:51.857 4.148 - 4.172: 95.9903% ( 68) 00:15:51.857 4.172 - 4.196: 96.2637% ( 36) 00:15:51.857 4.196 - 4.219: 96.4915% ( 30) 00:15:51.857 4.219 - 4.243: 96.6510% ( 21) 00:15:51.857 4.243 - 4.267: 96.7421% ( 12) 00:15:51.857 4.267 - 4.290: 96.8636% ( 16) 00:15:51.857 4.290 - 4.314: 96.9927% ( 17) 00:15:51.857 4.314 - 4.338: 97.0914% ( 13) 00:15:51.857 4.338 - 4.361: 97.1750% ( 11) 00:15:51.857 4.361 - 4.385: 97.2281% ( 7) 00:15:51.857 4.385 - 4.409: 97.2965% ( 9) 00:15:51.857 4.409 - 4.433: 97.3496% ( 7) 00:15:51.857 4.433 - 4.456: 97.3724% ( 3) 00:15:51.857 4.456 - 4.480: 97.3876% ( 2) 00:15:51.857 4.480 - 4.504: 97.4104% ( 3) 00:15:51.857 4.504 - 4.527: 97.4256% ( 2) 00:15:51.857 4.527 - 4.551: 97.4408% ( 2) 00:15:51.857 4.551 - 4.575: 97.4484% ( 1) 00:15:51.857 4.575 - 4.599: 97.4635% ( 2) 00:15:51.857 4.599 - 4.622: 97.4711% ( 1) 00:15:51.857 4.622 - 4.646: 97.4787% ( 1) 00:15:51.857 4.646 - 4.670: 97.4939% ( 2) 00:15:51.857 4.693 - 4.717: 97.5091% ( 2) 00:15:51.857 4.717 - 4.741: 97.5471% ( 5) 00:15:51.857 4.741 - 4.764: 97.5699% ( 3) 00:15:51.857 4.764 - 4.788: 97.6230% ( 7) 00:15:51.857 4.788 - 4.812: 97.6458% ( 3) 00:15:51.857 4.812 - 4.836: 97.7142% ( 9) 00:15:51.857 4.836 - 4.859: 97.7445% ( 4) 00:15:51.857 4.859 - 4.883: 97.8129% ( 9) 00:15:51.857 4.883 - 4.907: 97.8584% ( 6) 00:15:51.857 4.907 - 4.930: 97.8812% ( 3) 00:15:51.857 4.930 - 4.954: 97.9344% ( 7) 00:15:51.857 4.954 - 4.978: 97.9724% ( 5) 00:15:51.857 4.978 - 5.001: 98.0103% ( 5) 00:15:51.857 5.001 - 5.025: 98.0559% ( 6) 00:15:51.857 5.025 - 5.049: 98.0939% ( 5) 00:15:51.857 5.049 - 5.073: 98.1242% ( 4) 00:15:51.857 5.073 - 5.096: 98.1774% ( 7) 
00:15:51.857 5.096 - 5.120: 98.1850% ( 1) 00:15:51.857 5.120 - 5.144: 98.2078% ( 3) 00:15:51.857 5.144 - 5.167: 98.2154% ( 1) 00:15:51.857 5.167 - 5.191: 98.2382% ( 3) 00:15:51.857 5.215 - 5.239: 98.2457% ( 1) 00:15:51.857 5.239 - 5.262: 98.2533% ( 1) 00:15:51.857 5.262 - 5.286: 98.2609% ( 1) 00:15:51.857 5.310 - 5.333: 98.2761% ( 2) 00:15:51.857 5.381 - 5.404: 98.2837% ( 1) 00:15:51.857 5.428 - 5.452: 98.2989% ( 2) 00:15:51.857 5.665 - 5.689: 98.3065% ( 1) 00:15:51.857 5.831 - 5.855: 98.3141% ( 1) 00:15:51.857 5.973 - 5.997: 98.3217% ( 1) 00:15:51.857 6.400 - 6.447: 98.3293% ( 1) 00:15:51.857 6.447 - 6.495: 98.3445% ( 2) 00:15:51.857 6.590 - 6.637: 98.3521% ( 1) 00:15:51.857 6.732 - 6.779: 98.3597% ( 1) 00:15:51.857 6.874 - 6.921: 98.3673% ( 1) 00:15:51.857 6.921 - 6.969: 98.3748% ( 1) 00:15:51.857 7.206 - 7.253: 98.3900% ( 2) 00:15:51.857 7.253 - 7.301: 98.4204% ( 4) 00:15:51.857 7.396 - 7.443: 98.4356% ( 2) 00:15:51.857 7.633 - 7.680: 98.4508% ( 2) 00:15:51.857 7.680 - 7.727: 98.4660% ( 2) 00:15:51.857 7.822 - 7.870: 98.4736% ( 1) 00:15:51.857 7.870 - 7.917: 98.4888% ( 2) 00:15:51.857 7.917 - 7.964: 98.4964% ( 1) 00:15:51.857 7.964 - 8.012: 98.5039% ( 1) 00:15:51.857 8.012 - 8.059: 98.5115% ( 1) 00:15:51.857 8.059 - 8.107: 98.5191% ( 1) 00:15:51.857 8.154 - 8.201: 98.5267% ( 1) 00:15:51.857 8.201 - 8.249: 98.5419% ( 2) 00:15:51.857 8.249 - 8.296: 98.5495% ( 1) 00:15:51.857 8.296 - 8.344: 98.5647% ( 2) 00:15:51.857 8.391 - 8.439: 98.5799% ( 2) 00:15:51.857 8.486 - 8.533: 98.5875% ( 1) 00:15:51.857 8.865 - 8.913: 98.5951% ( 1) 00:15:51.857 9.007 - 9.055: 98.6103% ( 2) 00:15:51.857 9.102 - 9.150: 98.6179% ( 1) 00:15:51.857 9.197 - 9.244: 98.6255% ( 1) 00:15:51.857 9.244 - 9.292: 98.6330% ( 1) 00:15:51.857 9.339 - 9.387: 98.6406% ( 1) 00:15:51.857 9.434 - 9.481: 98.6482% ( 1) 00:15:51.857 9.576 - 9.624: 98.6558% ( 1) 00:15:51.857 9.624 - 9.671: 98.6634% ( 1) 00:15:51.857 9.719 - 9.766: 98.6710% ( 1) 00:15:51.857 9.861 - 9.908: 98.6786% ( 1) 00:15:51.857 10.477 - 
10.524: 98.6862% ( 1) 00:15:51.857 10.524 - 10.572: 98.6938% ( 1) 00:15:51.857 10.714 - 10.761: 98.7014% ( 1) 00:15:51.857 10.761 - 10.809: 98.7090% ( 1) 00:15:51.857 10.809 - 10.856: 98.7242% ( 2) 00:15:51.857 10.999 - 11.046: 98.7318% ( 1) 00:15:51.857 11.236 - 11.283: 98.7470% ( 2) 00:15:51.857 11.283 - 11.330: 98.7546% ( 1) 00:15:51.857 11.330 - 11.378: 98.7622% ( 1) 00:15:51.857 11.378 - 11.425: 98.7697% ( 1) 00:15:51.857 11.473 - 11.520: 98.7773% ( 1) 00:15:51.857 11.567 - 11.615: 98.7849% ( 1) 00:15:51.857 11.710 - 11.757: 98.7925% ( 1) 00:15:51.857 12.136 - 12.231: 98.8001% ( 1) 00:15:51.857 12.231 - 12.326: 98.8077% ( 1) 00:15:51.857 12.516 - 12.610: 98.8153% ( 1) 00:15:51.857 12.610 - 12.705: 98.8305% ( 2) 00:15:51.857 13.084 - 13.179: 98.8381% ( 1) 00:15:51.857 13.464 - 13.559: 98.8457% ( 1) 00:15:51.857 14.412 - 14.507: 98.8533% ( 1) 00:15:51.857 14.507 - 14.601: 98.8685% ( 2) 00:15:51.857 14.886 - 14.981: 98.8761% ( 1) 00:15:51.857 15.170 - 15.265: 98.8837% ( 1) 00:15:51.857 16.498 - 16.593: 98.8913% ( 1) 00:15:51.857 17.067 - 17.161: 98.8988% ( 1) 00:15:51.857 17.256 - 17.351: 98.9064% ( 1) 00:15:51.857 17.351 - 17.446: 98.9368% ( 4) 00:15:51.857 17.446 - 17.541: 98.9748% ( 5) 00:15:51.857 17.541 - 17.636: 98.9900% ( 2) 00:15:51.857 17.636 - 17.730: 99.0204% ( 4) 00:15:51.858 17.730 - 17.825: 99.0355% ( 2) 00:15:51.858 17.825 - 17.920: 99.0659% ( 4) 00:15:51.858 17.920 - 18.015: 99.1191% ( 7) 00:15:51.858 18.015 - 18.110: 99.1722% ( 7) 00:15:51.858 18.110 - 18.204: 99.2786% ( 14) 00:15:51.858 18.204 - 18.299: 99.3393% ( 8) 00:15:51.858 18.299 - 18.394: 99.4152% ( 10) 00:15:51.858 18.394 - 18.489: 99.5368% ( 16) 00:15:51.858 18.489 - 18.584: 99.5519% ( 2) 00:15:51.858 18.584 - 18.679: 99.5747% ( 3) 00:15:51.858 18.679 - 18.773: 99.6127% ( 5) 00:15:51.858 18.773 - 18.868: 99.6355% ( 3) 00:15:51.858 18.868 - 18.963: 99.6583% ( 3) 00:15:51.858 18.963 - 19.058: 99.6810% ( 3) 00:15:51.858 19.058 - 19.153: 99.6962% ( 2) 00:15:51.858 19.153 - 19.247: 99.7190% 
( 3) 00:15:51.858 19.247 - 19.342: 99.7266% ( 1) 00:15:51.858 19.342 - 19.437: 99.7342% ( 1) 00:15:51.858 19.437 - 19.532: 99.7646% ( 4) 00:15:51.858 21.807 - 21.902: 99.7722% ( 1) 00:15:51.858 21.997 - 22.092: 99.7798% ( 1) 00:15:51.858 22.092 - 22.187: 99.7874% ( 1) 00:15:51.858 22.187 - 22.281: 99.7950% ( 1) 00:15:51.858 22.281 - 22.376: 99.8026% ( 1) 00:15:51.858 23.799 - 23.893: 99.8101% ( 1) 00:15:51.858 25.790 - 25.979: 99.8253% ( 2) 00:15:51.858 25.979 - 26.169: 99.8329% ( 1) 00:15:51.858 26.738 - 26.927: 99.8405% ( 1) 00:15:51.858 29.772 - 29.961: 99.8481% ( 1) 00:15:51.858 31.479 - 31.668: 99.8557% ( 1) 00:15:51.858 34.702 - 34.892: 99.8633% ( 1) 00:15:51.858 3980.705 - 4004.978: 99.9848% ( 16) 00:15:51.858 4004.978 - 4029.250: 100.0000% ( 2) 00:15:51.858 00:15:51.858 Complete histogram 00:15:51.858 ================== 00:15:51.858 Range in us Cumulative Count 00:15:51.858 2.062 - 2.074: 12.1279% ( 1597) 00:15:51.858 2.074 - 2.086: 44.0538% ( 4204) 00:15:51.858 2.086 - 2.098: 47.1294% ( 405) 00:15:51.858 2.098 - 2.110: 53.5465% ( 845) 00:15:51.858 2.110 - 2.121: 60.0091% ( 851) 00:15:51.858 2.121 - 2.133: 61.8089% ( 237) 00:15:51.858 2.133 - 2.145: 69.3348% ( 991) 00:15:51.858 2.145 - 2.157: 76.0480% ( 884) 00:15:51.858 2.157 - 2.169: 76.7998% ( 99) 00:15:51.858 2.169 - 2.181: 79.3742% ( 339) 00:15:51.858 2.181 - 2.193: 81.3867% ( 265) 00:15:51.858 2.193 - 2.204: 82.0854% ( 92) 00:15:51.858 2.204 - 2.216: 85.2217% ( 413) 00:15:51.858 2.216 - 2.228: 88.9809% ( 495) 00:15:51.858 2.228 - 2.240: 90.8566% ( 247) 00:15:51.858 2.240 - 2.252: 92.4970% ( 216) 00:15:51.858 2.252 - 2.264: 93.6741% ( 155) 00:15:51.858 2.264 - 2.276: 93.9930% ( 42) 00:15:51.858 2.276 - 2.287: 94.3651% ( 49) 00:15:51.858 2.287 - 2.299: 94.7296% ( 48) 00:15:51.858 2.299 - 2.311: 95.3144% ( 77) 00:15:51.858 2.311 - 2.323: 95.5878% ( 36) 00:15:51.858 2.323 - 2.335: 95.7017% ( 15) 00:15:51.858 2.335 - 2.347: 95.8004% ( 13) 00:15:51.858 2.347 - 2.359: 95.9827% ( 24) 00:15:51.858 2.359 - 
2.370: 96.3168% ( 44) 00:15:51.858 2.370 - 2.382: 96.6738% ( 47) 00:15:51.858 2.382 - 2.394: 97.1750% ( 66) 00:15:51.858 2.394 - 2.406: 97.5015% ( 43) 00:15:51.858 2.406 - 2.418: 97.6078% ( 14) 00:15:51.858 2.418 - 2.430: 97.6762% ( 9) 00:15:51.858 2.430 - 2.441: 97.8281% ( 20) 00:15:51.858 2.441 - 2.453: 97.9420% ( 15) 00:15:51.858 2.453 - 2.465: 98.0939% ( 20) 00:15:51.858 2.465 - 2.477: 98.1926% ( 13) 00:15:51.858 2.477 - 2.489: 98.2382% ( 6) 00:15:51.858 2.489 - 2.501: 98.2989% ( 8) 00:15:51.858 2.501 - 2.513: 98.3521% ( 7) 00:15:51.858 2.513 - 2.524: 98.3748% ( 3) 00:15:51.858 2.524 - 2.536: 98.3900% ( 2) 00:15:51.858 2.536 - 2.548: 98.3976% ( 1) 00:15:51.858 2.548 - 2.560: 98.4280% ( 4) 00:15:51.858 2.560 - 2.572: 98.4356% ( 1) 00:15:51.858 2.631 - 2.643: 98.4508% ( 2) 00:15:51.858 2.655 - 2.667: 98.4584% ( 1) 00:15:51.858 2.679 - 2.690: 98.4736% ( 2) 00:15:51.858 2.714 - 2.726: 98.4888% ( 2) 00:15:51.858 2.738 - 2.750: 98.4964% ( 1) 00:15:51.858 2.761 - 2.773: 98.5039% ( 1) 00:15:51.858 2.773 - 2.785: 98.5115% ( 1) 00:15:51.858 3.010 - 3.022: 98.5191% ( 1) 00:15:51.858 3.342 - 3.366: 98.5343% ( 2) 00:15:51.858 3.366 - 3.390: 98.5647% ( 4) 00:15:51.858 3.390 - 3.413: 98.5875% ( 3) 00:15:51.858 3.413 - 3.437: 98.6103% ( 3) 00:15:51.858 3.437 - 3.461: 98.6255% ( 2) 00:15:51.858 3.461 - 3.484: 98.6330% ( 1) 00:15:51.858 3.532 - 3.556: 98.6482% ( 2) 00:15:51.858 3.556 - 3.579: 98.6634% ( 2) 00:15:51.858 3.603 - 3.627: 98.6710% ( 1) 00:15:51.858 3.674 - 3.698: 98.6786% ( 1) 00:15:51.858 3.721 - 3.745: 98.6938% ( 2) 00:15:51.858 3.816 - 3.840: 98.7014% ( 1) 00:15:51.858 4.006 - 4.030: 98.7090% ( 1) 00:15:51.858 4.053 - 4.077: 98.7166% ( 1) 00:15:51.858 4.551 - 4.575: 98.7242% ( 1) 00:15:51.858 4.930 - 4.954: 98.7318% ( 1) 00:15:51.858 5.191 - 5.215: 98.7394% ( 1) 00:15:51.858 5.286 - 5.310: 98.7470% ( 1) 00:15:51.858 5.381 - 5.404: 98.7622% ( 2) 00:15:51.858 6.021 - 6.044: 98.7773% ( 2) 00:15:51.858 6.044 - 6.068: 98.7849% ( 1) 00:15:51.858 6.258 - 6.305: 98.7925% 
( 1) 00:15:51.858 6.305 - 6.353: 98.8153% ( 3) 00:15:51.858 6.447 - 6.495: 9[2024-07-14 18:47:40.057657] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:52.115 8.8229% ( 1) 00:15:52.115 7.538 - 7.585: 98.8305% ( 1) 00:15:52.115 8.154 - 8.201: 98.8381% ( 1) 00:15:52.115 15.455 - 15.550: 98.8457% ( 1) 00:15:52.115 15.644 - 15.739: 98.8761% ( 4) 00:15:52.115 15.739 - 15.834: 98.8837% ( 1) 00:15:52.115 15.834 - 15.929: 98.8913% ( 1) 00:15:52.115 15.929 - 16.024: 98.9064% ( 2) 00:15:52.115 16.024 - 16.119: 98.9292% ( 3) 00:15:52.115 16.119 - 16.213: 98.9748% ( 6) 00:15:52.115 16.213 - 16.308: 99.0052% ( 4) 00:15:52.115 16.308 - 16.403: 99.0431% ( 5) 00:15:52.115 16.403 - 16.498: 99.1191% ( 10) 00:15:52.115 16.498 - 16.593: 99.1722% ( 7) 00:15:52.115 16.593 - 16.687: 99.2178% ( 6) 00:15:52.115 16.687 - 16.782: 99.2558% ( 5) 00:15:52.115 16.782 - 16.877: 99.2937% ( 5) 00:15:52.115 16.877 - 16.972: 99.3317% ( 5) 00:15:52.115 16.972 - 17.067: 99.3469% ( 2) 00:15:52.115 17.067 - 17.161: 99.3545% ( 1) 00:15:52.115 17.161 - 17.256: 99.3621% ( 1) 00:15:52.115 17.256 - 17.351: 99.3773% ( 2) 00:15:52.115 17.446 - 17.541: 99.3849% ( 1) 00:15:52.115 17.541 - 17.636: 99.4001% ( 2) 00:15:52.115 18.015 - 18.110: 99.4077% ( 1) 00:15:52.115 18.394 - 18.489: 99.4152% ( 1) 00:15:52.115 18.489 - 18.584: 99.4228% ( 1) 00:15:52.115 18.963 - 19.058: 99.4304% ( 1) 00:15:52.115 29.961 - 30.151: 99.4380% ( 1) 00:15:52.115 3470.981 - 3495.253: 99.4456% ( 1) 00:15:52.115 3980.705 - 4004.978: 99.8709% ( 56) 00:15:52.115 4004.978 - 4029.250: 99.9924% ( 16) 00:15:52.115 4077.796 - 4102.068: 100.0000% ( 1) 00:15:52.115 00:15:52.115 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:52.115 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:52.115 18:47:40 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:52.115 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:52.115 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:52.372 [ 00:15:52.372 { 00:15:52.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:52.372 "subtype": "Discovery", 00:15:52.372 "listen_addresses": [], 00:15:52.372 "allow_any_host": true, 00:15:52.372 "hosts": [] 00:15:52.372 }, 00:15:52.372 { 00:15:52.372 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:52.372 "subtype": "NVMe", 00:15:52.372 "listen_addresses": [ 00:15:52.372 { 00:15:52.372 "trtype": "VFIOUSER", 00:15:52.372 "adrfam": "IPv4", 00:15:52.372 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:52.372 "trsvcid": "0" 00:15:52.372 } 00:15:52.372 ], 00:15:52.372 "allow_any_host": true, 00:15:52.372 "hosts": [], 00:15:52.372 "serial_number": "SPDK1", 00:15:52.372 "model_number": "SPDK bdev Controller", 00:15:52.372 "max_namespaces": 32, 00:15:52.372 "min_cntlid": 1, 00:15:52.372 "max_cntlid": 65519, 00:15:52.372 "namespaces": [ 00:15:52.372 { 00:15:52.372 "nsid": 1, 00:15:52.372 "bdev_name": "Malloc1", 00:15:52.372 "name": "Malloc1", 00:15:52.372 "nguid": "A790C0ED38434A7FB8EAF293897C84E6", 00:15:52.372 "uuid": "a790c0ed-3843-4a7f-b8ea-f293897c84e6" 00:15:52.372 }, 00:15:52.372 { 00:15:52.372 "nsid": 2, 00:15:52.372 "bdev_name": "Malloc3", 00:15:52.372 "name": "Malloc3", 00:15:52.372 "nguid": "80AAD45ADE87438CB5529BC4D14A112A", 00:15:52.372 "uuid": "80aad45a-de87-438c-b552-9bc4d14a112a" 00:15:52.372 } 00:15:52.372 ] 00:15:52.372 }, 00:15:52.372 { 00:15:52.372 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:52.372 "subtype": "NVMe", 00:15:52.372 "listen_addresses": [ 00:15:52.372 { 00:15:52.372 "trtype": "VFIOUSER", 00:15:52.372 "adrfam": "IPv4", 00:15:52.372 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:15:52.372 "trsvcid": "0" 00:15:52.372 } 00:15:52.372 ], 00:15:52.372 "allow_any_host": true, 00:15:52.372 "hosts": [], 00:15:52.372 "serial_number": "SPDK2", 00:15:52.372 "model_number": "SPDK bdev Controller", 00:15:52.372 "max_namespaces": 32, 00:15:52.372 "min_cntlid": 1, 00:15:52.372 "max_cntlid": 65519, 00:15:52.372 "namespaces": [ 00:15:52.372 { 00:15:52.372 "nsid": 1, 00:15:52.372 "bdev_name": "Malloc2", 00:15:52.372 "name": "Malloc2", 00:15:52.372 "nguid": "26C753863FA746AF8D43370E4B21ABCE", 00:15:52.372 "uuid": "26c75386-3fa7-46af-8d43-370e4b21abce" 00:15:52.372 } 00:15:52.372 ] 00:15:52.372 } 00:15:52.372 ] 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3563303 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:52.372 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:52.372 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.372 [2024-07-14 18:47:40.560374] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:52.629 Malloc4 00:15:52.629 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:52.886 [2024-07-14 18:47:40.925999] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:52.886 18:47:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:52.886 Asynchronous Event Request test 00:15:52.886 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:52.886 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:52.886 Registering asynchronous event callbacks... 00:15:52.886 Starting namespace attribute notice tests for all controllers... 00:15:52.886 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:52.886 aer_cb - Changed Namespace 00:15:52.886 Cleaning up... 
00:15:53.144 [ 00:15:53.144 { 00:15:53.144 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:53.144 "subtype": "Discovery", 00:15:53.144 "listen_addresses": [], 00:15:53.144 "allow_any_host": true, 00:15:53.144 "hosts": [] 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:53.144 "subtype": "NVMe", 00:15:53.144 "listen_addresses": [ 00:15:53.144 { 00:15:53.144 "trtype": "VFIOUSER", 00:15:53.144 "adrfam": "IPv4", 00:15:53.145 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:53.145 "trsvcid": "0" 00:15:53.145 } 00:15:53.145 ], 00:15:53.145 "allow_any_host": true, 00:15:53.145 "hosts": [], 00:15:53.145 "serial_number": "SPDK1", 00:15:53.145 "model_number": "SPDK bdev Controller", 00:15:53.145 "max_namespaces": 32, 00:15:53.145 "min_cntlid": 1, 00:15:53.145 "max_cntlid": 65519, 00:15:53.145 "namespaces": [ 00:15:53.145 { 00:15:53.145 "nsid": 1, 00:15:53.145 "bdev_name": "Malloc1", 00:15:53.145 "name": "Malloc1", 00:15:53.145 "nguid": "A790C0ED38434A7FB8EAF293897C84E6", 00:15:53.145 "uuid": "a790c0ed-3843-4a7f-b8ea-f293897c84e6" 00:15:53.145 }, 00:15:53.145 { 00:15:53.145 "nsid": 2, 00:15:53.145 "bdev_name": "Malloc3", 00:15:53.145 "name": "Malloc3", 00:15:53.145 "nguid": "80AAD45ADE87438CB5529BC4D14A112A", 00:15:53.145 "uuid": "80aad45a-de87-438c-b552-9bc4d14a112a" 00:15:53.145 } 00:15:53.145 ] 00:15:53.145 }, 00:15:53.145 { 00:15:53.145 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:53.145 "subtype": "NVMe", 00:15:53.145 "listen_addresses": [ 00:15:53.145 { 00:15:53.145 "trtype": "VFIOUSER", 00:15:53.145 "adrfam": "IPv4", 00:15:53.145 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:53.145 "trsvcid": "0" 00:15:53.145 } 00:15:53.145 ], 00:15:53.145 "allow_any_host": true, 00:15:53.145 "hosts": [], 00:15:53.145 "serial_number": "SPDK2", 00:15:53.145 "model_number": "SPDK bdev Controller", 00:15:53.145 "max_namespaces": 32, 00:15:53.145 "min_cntlid": 1, 00:15:53.145 "max_cntlid": 65519, 00:15:53.145 "namespaces": [ 
00:15:53.145 { 00:15:53.145 "nsid": 1, 00:15:53.145 "bdev_name": "Malloc2", 00:15:53.145 "name": "Malloc2", 00:15:53.145 "nguid": "26C753863FA746AF8D43370E4B21ABCE", 00:15:53.145 "uuid": "26c75386-3fa7-46af-8d43-370e4b21abce" 00:15:53.145 }, 00:15:53.145 { 00:15:53.145 "nsid": 2, 00:15:53.145 "bdev_name": "Malloc4", 00:15:53.145 "name": "Malloc4", 00:15:53.145 "nguid": "ACB01828F0AF499EA3024025FEA4B9A5", 00:15:53.145 "uuid": "acb01828-f0af-499e-a302-4025fea4b9a5" 00:15:53.145 } 00:15:53.145 ] 00:15:53.145 } 00:15:53.145 ] 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3563303 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3557703 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3557703 ']' 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3557703 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3557703 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3557703' 00:15:53.145 killing process with pid 3557703 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3557703 00:15:53.145 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3557703 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3563445 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3563445' 00:15:53.413 Process pid: 3563445 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3563445 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3563445 ']' 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.413 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:53.413 [2024-07-14 18:47:41.575964] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:53.413 [2024-07-14 18:47:41.577005] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:53.413 [2024-07-14 18:47:41.577063] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.413 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.721 [2024-07-14 18:47:41.641314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.721 [2024-07-14 18:47:41.730446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.721 [2024-07-14 18:47:41.730509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.721 [2024-07-14 18:47:41.730526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.721 [2024-07-14 18:47:41.730540] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.721 [2024-07-14 18:47:41.730551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:53.721 [2024-07-14 18:47:41.730641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.721 [2024-07-14 18:47:41.730711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.721 [2024-07-14 18:47:41.730803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.721 [2024-07-14 18:47:41.730805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.721 [2024-07-14 18:47:41.831396] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:53.721 [2024-07-14 18:47:41.831699] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:53.721 [2024-07-14 18:47:41.831991] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:53.721 [2024-07-14 18:47:41.832541] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:53.721 [2024-07-14 18:47:41.832777] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:53.721 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.721 18:47:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:53.721 18:47:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:54.653 18:47:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:54.912 18:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:54.912 18:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:54.912 18:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:54.912 18:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:54.912 18:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:55.179 Malloc1 00:15:55.179 18:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:55.442 18:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:55.699 18:47:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:55.957 18:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:55.957 18:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:15:55.957 18:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:56.215 Malloc2 00:15:56.215 18:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:56.473 18:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:56.730 18:47:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3563445 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3563445 ']' 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3563445 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3563445 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3563445' 00:15:56.988 killing 
process with pid 3563445 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3563445 00:15:56.988 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3563445 00:15:57.247 18:47:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:57.247 18:47:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:57.247 00:15:57.247 real 0m52.949s 00:15:57.247 user 3m29.285s 00:15:57.247 sys 0m4.276s 00:15:57.247 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.247 18:47:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:57.247 ************************************ 00:15:57.247 END TEST nvmf_vfio_user 00:15:57.247 ************************************ 00:15:57.247 18:47:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:57.247 18:47:45 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:57.247 18:47:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:57.247 18:47:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.247 18:47:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:57.505 ************************************ 00:15:57.505 START TEST nvmf_vfio_user_nvme_compliance 00:15:57.505 ************************************ 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:57.505 * Looking for test storage... 
00:15:57.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.505 18:47:45 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:57.505 18:47:45 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3564042 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3564042' 00:15:57.505 Process pid: 3564042 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3564042 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3564042 ']' 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.505 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:57.505 [2024-07-14 18:47:45.583241] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:57.506 [2024-07-14 18:47:45.583335] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.506 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.506 [2024-07-14 18:47:45.641642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.506 [2024-07-14 18:47:45.725472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.506 [2024-07-14 18:47:45.725522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:57.506 [2024-07-14 18:47:45.725549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.506 [2024-07-14 18:47:45.725560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.506 [2024-07-14 18:47:45.725569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.506 [2024-07-14 18:47:45.725650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.506 [2024-07-14 18:47:45.725716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.506 [2024-07-14 18:47:45.725719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.763 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.763 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:15:57.763 18:47:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 malloc0 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.696 18:47:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:58.953 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.953 00:15:58.953 00:15:58.953 CUnit - A unit testing framework for C - Version 2.1-3 00:15:58.954 http://cunit.sourceforge.net/ 00:15:58.954 00:15:58.954 00:15:58.954 Suite: nvme_compliance 00:15:58.954 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 18:47:47.080451] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.954 [2024-07-14 18:47:47.081855] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:58.954 [2024-07-14 18:47:47.081903] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:58.954 [2024-07-14 18:47:47.081916] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:58.954 [2024-07-14 18:47:47.083473] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.954 passed 00:15:58.954 Test: admin_identify_ctrlr_verify_fused ...[2024-07-14 18:47:47.171061] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.954 [2024-07-14 18:47:47.174084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.211 passed 00:15:59.211 Test: admin_identify_ns ...[2024-07-14 18:47:47.259406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.211 [2024-07-14 18:47:47.318908] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:59.211 [2024-07-14 18:47:47.326895] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:59.211 [2024-07-14 18:47:47.348032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:15:59.211 passed 00:15:59.211 Test: admin_get_features_mandatory_features ...[2024-07-14 18:47:47.430192] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.211 [2024-07-14 18:47:47.433217] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.468 passed 00:15:59.468 Test: admin_get_features_optional_features ...[2024-07-14 18:47:47.519822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.468 [2024-07-14 18:47:47.522839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.468 passed 00:15:59.468 Test: admin_set_features_number_of_queues ...[2024-07-14 18:47:47.608037] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.725 [2024-07-14 18:47:47.713019] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.725 passed 00:15:59.725 Test: admin_get_log_page_mandatory_logs ...[2024-07-14 18:47:47.795746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.725 [2024-07-14 18:47:47.798772] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.725 passed 00:15:59.725 Test: admin_get_log_page_with_lpo ...[2024-07-14 18:47:47.882892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.983 [2024-07-14 18:47:47.951893] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:59.983 [2024-07-14 18:47:47.964950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.983 passed 00:15:59.983 Test: fabric_property_get ...[2024-07-14 18:47:48.047608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.983 [2024-07-14 18:47:48.048946] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:15:59.983 [2024-07-14 18:47:48.050631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.983 passed 00:15:59.983 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 18:47:48.132170] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.983 [2024-07-14 18:47:48.133467] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:59.983 [2024-07-14 18:47:48.137222] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:59.983 passed 00:16:00.240 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 18:47:48.218449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:00.240 [2024-07-14 18:47:48.304890] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:00.240 [2024-07-14 18:47:48.320902] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:00.240 [2024-07-14 18:47:48.326011] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:00.240 passed 00:16:00.240 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 18:47:48.411641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:00.240 [2024-07-14 18:47:48.412944] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:00.240 [2024-07-14 18:47:48.414669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:00.240 passed 00:16:00.498 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 18:47:48.496339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:00.498 [2024-07-14 18:47:48.574900] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:00.498 [2024-07-14 18:47:48.598903] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:00.498 [2024-07-14 18:47:48.604011] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:00.498 passed 00:16:00.498 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 18:47:48.690150] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:00.498 [2024-07-14 18:47:48.691454] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:00.498 [2024-07-14 18:47:48.691504] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:00.498 [2024-07-14 18:47:48.693193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:00.498 passed 00:16:00.756 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 18:47:48.774435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:00.756 [2024-07-14 18:47:48.864889] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:00.756 [2024-07-14 18:47:48.872904] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:00.756 [2024-07-14 18:47:48.880919] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:00.756 [2024-07-14 18:47:48.888898] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:00.756 [2024-07-14 18:47:48.917993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:00.756 passed 00:16:01.013 Test: admin_create_io_sq_verify_pc ...[2024-07-14 18:47:49.003078] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:01.013 [2024-07-14 18:47:49.020898] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:01.013 [2024-07-14 18:47:49.038435] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:01.013 passed 00:16:01.013 Test: admin_create_io_qp_max_qps ...[2024-07-14 18:47:49.123978] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.386 [2024-07-14 18:47:50.236893] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:02.644 [2024-07-14 18:47:50.634300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.644 passed 00:16:02.644 Test: admin_create_io_sq_shared_cq ...[2024-07-14 18:47:50.716421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.644 [2024-07-14 18:47:50.847899] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:02.902 [2024-07-14 18:47:50.884986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.902 passed 00:16:02.902 00:16:02.902 Run Summary: Type Total Ran Passed Failed Inactive 00:16:02.902 suites 1 1 n/a 0 0 00:16:02.902 tests 18 18 18 0 0 00:16:02.902 asserts 360 360 360 0 n/a 00:16:02.902 00:16:02.902 Elapsed time = 1.581 seconds 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3564042 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3564042 ']' 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3564042 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3564042 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3564042' 00:16:02.902 killing process with pid 3564042 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3564042 00:16:02.902 18:47:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3564042 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:03.160 00:16:03.160 real 0m5.756s 00:16:03.160 user 0m16.200s 00:16:03.160 sys 0m0.558s 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:03.160 ************************************ 00:16:03.160 END TEST nvmf_vfio_user_nvme_compliance 00:16:03.160 ************************************ 00:16:03.160 18:47:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:03.160 18:47:51 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:03.160 18:47:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:03.160 18:47:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.160 18:47:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:03.160 ************************************ 00:16:03.160 START TEST nvmf_vfio_user_fuzz 00:16:03.160 ************************************ 00:16:03.160 18:47:51 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:03.160 * Looking for test storage... 00:16:03.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.160 
18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3564756 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3564756' 00:16:03.160 Process pid: 3564756 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3564756 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3564756 ']' 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.160 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.724 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.724 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:03.724 18:47:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.656 malloc0 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:04.656 18:47:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:36.719 Fuzzing completed. 
Shutting down the fuzz application 00:16:36.719 00:16:36.719 Dumping successful admin opcodes: 00:16:36.719 8, 9, 10, 24, 00:16:36.719 Dumping successful io opcodes: 00:16:36.719 0, 00:16:36.719 NS: 0x200003a1ef00 I/O qp, Total commands completed: 731853, total successful commands: 2843, random_seed: 3792561024 00:16:36.719 NS: 0x200003a1ef00 admin qp, Total commands completed: 103614, total successful commands: 856, random_seed: 995600640 00:16:36.719 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:36.719 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3564756 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3564756 ']' 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3564756 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3564756 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3564756' 00:16:36.720 killing process with pid 3564756 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 3564756 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3564756 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:36.720 00:16:36.720 real 0m32.207s 00:16:36.720 user 0m33.677s 00:16:36.720 sys 0m27.951s 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.720 18:48:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:36.720 ************************************ 00:16:36.720 END TEST nvmf_vfio_user_fuzz 00:16:36.720 ************************************ 00:16:36.720 18:48:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:36.720 18:48:23 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:36.720 18:48:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:36.720 18:48:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.720 18:48:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.720 ************************************ 00:16:36.720 START TEST nvmf_host_management 00:16:36.720 ************************************ 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:36.720 * Looking for test storage... 
00:16:36.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.720 
18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.720 18:48:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:37.668 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.668 
18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:37.668 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.668 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:37.669 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:37.669 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:37.669 18:48:25 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:37.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:37.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:16:37.669 00:16:37.669 --- 10.0.0.2 ping statistics --- 00:16:37.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.669 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:37.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:37.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:16:37.669 00:16:37.669 --- 10.0.0.1 ping statistics --- 00:16:37.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.669 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:37.669 18:48:25 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3570690 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3570690 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3570690 ']' 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.669 18:48:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.669 [2024-07-14 18:48:25.757395] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:16:37.669 [2024-07-14 18:48:25.757469] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.669 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.669 [2024-07-14 18:48:25.820435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:37.927 [2024-07-14 18:48:25.912552] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.927 [2024-07-14 18:48:25.912599] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.927 [2024-07-14 18:48:25.912627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.927 [2024-07-14 18:48:25.912639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.927 [2024-07-14 18:48:25.912649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:37.927 [2024-07-14 18:48:25.912808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.927 [2024-07-14 18:48:25.912869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.927 [2024-07-14 18:48:25.912898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:37.927 [2024-07-14 18:48:25.912900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.927 [2024-07-14 18:48:26.054612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.927 18:48:26 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.927 Malloc0 00:16:37.927 [2024-07-14 18:48:26.115506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3570858 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3570858 /var/tmp/bdevperf.sock 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3570858 ']' 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:37.927 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.928 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:37.928 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.928 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:37.928 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.928 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:37.928 { 00:16:37.928 "params": { 00:16:37.928 "name": "Nvme$subsystem", 00:16:37.928 "trtype": "$TEST_TRANSPORT", 00:16:37.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.928 "adrfam": "ipv4", 00:16:37.928 "trsvcid": "$NVMF_PORT", 00:16:37.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.928 "hdgst": ${hdgst:-false}, 00:16:37.928 "ddgst": ${ddgst:-false} 00:16:37.928 }, 00:16:37.928 "method": "bdev_nvme_attach_controller" 00:16:37.928 } 00:16:37.928 EOF 00:16:37.928 )") 00:16:37.928 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:38.185 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:38.185 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:38.185 18:48:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:38.185 "params": { 00:16:38.185 "name": "Nvme0", 00:16:38.185 "trtype": "tcp", 00:16:38.185 "traddr": "10.0.0.2", 00:16:38.185 "adrfam": "ipv4", 00:16:38.185 "trsvcid": "4420", 00:16:38.185 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:38.185 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:38.185 "hdgst": false, 00:16:38.185 "ddgst": false 00:16:38.185 }, 00:16:38.185 "method": "bdev_nvme_attach_controller" 00:16:38.185 }' 00:16:38.185 [2024-07-14 18:48:26.193813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:38.185 [2024-07-14 18:48:26.193925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3570858 ] 00:16:38.185 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.185 [2024-07-14 18:48:26.255181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.185 [2024-07-14 18:48:26.341356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.443 Running I/O for 10 seconds... 
00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.443 
18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:38.443 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:38.701 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:38.701 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:38.701 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:38.701 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:38.701 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.701 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:38.960 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.960 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:16:38.960 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:16:38.960 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:38.960 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:38.960 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:38.960 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:38.960 18:48:26 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.960 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:38.960 [2024-07-14 18:48:26.958370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 
[2024-07-14 18:48:26.958595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.958629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24dd0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.960542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.960 [2024-07-14 18:48:26.960581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.960599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.960 [2024-07-14 18:48:26.960613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.960635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.960 [2024-07-14 18:48:26.960649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.960663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.960 [2024-07-14 18:48:26.960678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.960691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a3ed0 is same with the state(5) to be set 00:16:38.960 [2024-07-14 18:48:26.961061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.961114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.961147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.961177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.961209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 
18:48:26.961243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.961273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.961303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.961333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.961363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.960 [2024-07-14 18:48:26.961398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.960 [2024-07-14 18:48:26.961413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 
18:48:26.961946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.961976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.961992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 [2024-07-14 18:48:26.962616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 
[2024-07-14 18:48:26.962646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.961 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.961 [2024-07-14 18:48:26.962677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.961 [2024-07-14 18:48:26.962691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.962706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.962720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.962736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.962750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.962766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.962781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.962796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.962811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:38.962 [2024-07-14 18:48:26.962827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.962843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.962858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.962873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.962897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.962913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.962928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.962950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.962967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89600 len:1 18:48:26 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.962 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.962984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.963000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.963015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.963030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.963044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 [2024-07-14 18:48:26.963061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:38.962 [2024-07-14 18:48:26.963076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.962 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:38.962 [2024-07-14 18:48:26.963155] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26b5100 was disconnected and freed. reset controller. 
00:16:38.962 [2024-07-14 18:48:26.964269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:38.962 task offset: 81920 on job bdev=Nvme0n1 fails 00:16:38.962 00:16:38.962 Latency(us) 00:16:38.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.962 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:38.962 Job: Nvme0n1 ended in about 0.40 seconds with error 00:16:38.962 Verification LBA range: start 0x0 length 0x400 00:16:38.962 Nvme0n1 : 0.40 1591.42 99.46 159.14 0.00 35508.16 3034.07 34175.81 00:16:38.962 =================================================================================================================== 00:16:38.962 Total : 1591.42 99.46 159.14 0.00 35508.16 3034.07 34175.81 00:16:38.962 [2024-07-14 18:48:26.966122] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:38.962 [2024-07-14 18:48:26.966150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a3ed0 (9): Bad file descriptor 00:16:38.962 18:48:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.962 18:48:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:38.962 [2024-07-14 18:48:26.970918] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3570858 00:16:39.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3570858) - No such process 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:39.896 { 00:16:39.896 "params": { 00:16:39.896 "name": "Nvme$subsystem", 00:16:39.896 "trtype": "$TEST_TRANSPORT", 00:16:39.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:39.896 "adrfam": "ipv4", 00:16:39.896 "trsvcid": "$NVMF_PORT", 00:16:39.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:39.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:39.896 "hdgst": ${hdgst:-false}, 00:16:39.896 "ddgst": ${ddgst:-false} 00:16:39.896 }, 00:16:39.896 "method": "bdev_nvme_attach_controller" 00:16:39.896 } 00:16:39.896 EOF 00:16:39.896 )") 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:39.896 18:48:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:39.896 "params": { 00:16:39.896 "name": "Nvme0", 00:16:39.896 "trtype": "tcp", 00:16:39.896 "traddr": "10.0.0.2", 00:16:39.896 "adrfam": "ipv4", 00:16:39.896 "trsvcid": "4420", 00:16:39.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:39.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:39.896 "hdgst": false, 00:16:39.896 "ddgst": false 00:16:39.896 }, 00:16:39.896 "method": "bdev_nvme_attach_controller" 00:16:39.896 }' 00:16:39.896 [2024-07-14 18:48:28.020619] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:39.896 [2024-07-14 18:48:28.020692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3571016 ] 00:16:39.896 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.896 [2024-07-14 18:48:28.081069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.154 [2024-07-14 18:48:28.170665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.411 Running I/O for 1 seconds... 
00:16:41.344 00:16:41.344 Latency(us) 00:16:41.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.344 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:41.344 Verification LBA range: start 0x0 length 0x400 00:16:41.344 Nvme0n1 : 1.02 1636.98 102.31 0.00 0.00 38462.68 6262.33 32622.36 00:16:41.344 =================================================================================================================== 00:16:41.344 Total : 1636.98 102.31 0.00 0.00 38462.68 6262.33 32622.36 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.601 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.601 rmmod nvme_tcp 00:16:41.601 rmmod nvme_fabrics 00:16:41.601 rmmod nvme_keyring 00:16:41.602 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.602 
18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:41.602 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:41.602 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3570690 ']' 00:16:41.602 18:48:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3570690 00:16:41.602 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3570690 ']' 00:16:41.602 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3570690 00:16:41.602 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:41.602 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.602 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3570690 00:16:41.860 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:41.861 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:41.861 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3570690' 00:16:41.861 killing process with pid 3570690 00:16:41.861 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3570690 00:16:41.861 18:48:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3570690 00:16:41.861 [2024-07-14 18:48:30.041122] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:41.861 18:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.861 18:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.861 18:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.861 18:48:30 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.861 18:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.861 18:48:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.861 18:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.861 18:48:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.394 18:48:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:44.394 18:48:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:44.394 00:16:44.394 real 0m8.574s 00:16:44.394 user 0m19.606s 00:16:44.394 sys 0m2.544s 00:16:44.394 18:48:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.394 18:48:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:44.394 ************************************ 00:16:44.394 END TEST nvmf_host_management 00:16:44.394 ************************************ 00:16:44.394 18:48:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:44.394 18:48:32 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:44.394 18:48:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:44.394 18:48:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.394 18:48:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:44.394 ************************************ 00:16:44.394 START TEST nvmf_lvol 00:16:44.394 ************************************ 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:44.394 * 
Looking for test storage... 00:16:44.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.394 18:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:44.395 18:48:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:46.293 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:46.293 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:46.293 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:46.293 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.293 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:46.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:16:46.294 00:16:46.294 --- 10.0.0.2 ping statistics --- 00:16:46.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.294 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:46.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:16:46.294 00:16:46.294 --- 10.0.0.1 ping statistics --- 00:16:46.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.294 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3573228 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3573228 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3573228 ']' 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.294 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:46.294 [2024-07-14 18:48:34.407993] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:46.294 [2024-07-14 18:48:34.408064] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.294 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.294 [2024-07-14 18:48:34.476758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.552 [2024-07-14 18:48:34.567434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.552 [2024-07-14 18:48:34.567496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.552 [2024-07-14 18:48:34.567511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.552 [2024-07-14 18:48:34.567525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.552 [2024-07-14 18:48:34.567537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:46.552 [2024-07-14 18:48:34.567618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.552 [2024-07-14 18:48:34.567689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.552 [2024-07-14 18:48:34.567691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.552 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.552 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:46.552 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.552 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.552 18:48:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:46.552 18:48:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.552 18:48:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:46.810 [2024-07-14 18:48:34.905985] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.810 18:48:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.068 18:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:47.068 18:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.326 18:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:47.326 18:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:47.584 18:48:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:47.842 18:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b10ac8de-05a8-4bd2-afc7-4e5fea2838ea 00:16:47.842 18:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b10ac8de-05a8-4bd2-afc7-4e5fea2838ea lvol 20 00:16:48.100 18:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=908e825f-e41f-47e3-bbb1-360571ebe9c6 00:16:48.100 18:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:48.358 18:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 908e825f-e41f-47e3-bbb1-360571ebe9c6 00:16:48.616 18:48:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:48.875 [2024-07-14 18:48:37.093438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.134 18:48:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:49.392 18:48:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3573648 00:16:49.392 18:48:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:49.392 18:48:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:49.392 EAL: No free 2048 kB hugepages reported on node 1 
00:16:50.327 18:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 908e825f-e41f-47e3-bbb1-360571ebe9c6 MY_SNAPSHOT 00:16:50.585 18:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1f3ab6a2-eb66-4e87-a721-3d8f3c0362d9 00:16:50.585 18:48:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 908e825f-e41f-47e3-bbb1-360571ebe9c6 30 00:16:50.843 18:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1f3ab6a2-eb66-4e87-a721-3d8f3c0362d9 MY_CLONE 00:16:51.409 18:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=811b5cc8-9579-4580-b9c8-9d505bd79358 00:16:51.409 18:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 811b5cc8-9579-4580-b9c8-9d505bd79358 00:16:51.975 18:48:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3573648 00:17:00.123 Initializing NVMe Controllers 00:17:00.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:00.123 Controller IO queue size 128, less than required. 00:17:00.123 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:00.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:00.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:00.123 Initialization complete. Launching workers. 
00:17:00.123 ======================================================== 00:17:00.124 Latency(us) 00:17:00.124 Device Information : IOPS MiB/s Average min max 00:17:00.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10415.40 40.69 12294.97 1888.73 68789.21 00:17:00.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10509.00 41.05 12187.95 1972.74 73272.63 00:17:00.124 ======================================================== 00:17:00.124 Total : 20924.40 81.74 12241.22 1888.73 73272.63 00:17:00.124 00:17:00.124 18:48:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:00.124 18:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 908e825f-e41f-47e3-bbb1-360571ebe9c6 00:17:00.124 18:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b10ac8de-05a8-4bd2-afc7-4e5fea2838ea 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:00.381 rmmod nvme_tcp 00:17:00.381 rmmod nvme_fabrics 00:17:00.381 rmmod nvme_keyring 00:17:00.381 
18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3573228 ']' 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3573228 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3573228 ']' 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3573228 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3573228 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3573228' 00:17:00.381 killing process with pid 3573228 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3573228 00:17:00.381 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3573228 00:17:00.950 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:00.950 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:00.950 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:00.950 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:00.950 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:00.950 18:48:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.950 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.950 18:48:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.851 18:48:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:02.851 00:17:02.851 real 0m18.778s 00:17:02.851 user 1m4.620s 00:17:02.851 sys 0m5.402s 00:17:02.852 18:48:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:02.852 18:48:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:02.852 ************************************ 00:17:02.852 END TEST nvmf_lvol 00:17:02.852 ************************************ 00:17:02.852 18:48:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:02.852 18:48:50 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:02.852 18:48:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:02.852 18:48:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.852 18:48:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:02.852 ************************************ 00:17:02.852 START TEST nvmf_lvs_grow 00:17:02.852 ************************************ 00:17:02.852 18:48:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:02.852 * Looking for test storage... 
00:17:02.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.852 18:48:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.852 18:48:51 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:02.852 18:48:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:05.393 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.393 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.393 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.393 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.393 18:48:53 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.393 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.394 18:48:53 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:05.394 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:05.394 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.394 18:48:53 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:05.394 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:05.394 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:17:05.394 00:17:05.394 --- 10.0.0.2 ping statistics --- 00:17:05.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.394 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:05.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:17:05.394 00:17:05.394 --- 10.0.0.1 ping statistics --- 00:17:05.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.394 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3576905 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3576905 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3576905 ']' 
00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.394 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:05.394 [2024-07-14 18:48:53.310138] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:05.394 [2024-07-14 18:48:53.310227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.394 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.394 [2024-07-14 18:48:53.378419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.394 [2024-07-14 18:48:53.467570] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.394 [2024-07-14 18:48:53.467630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.394 [2024-07-14 18:48:53.467647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.395 [2024-07-14 18:48:53.467661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.395 [2024-07-14 18:48:53.467672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:05.395 [2024-07-14 18:48:53.467701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.395 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.395 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:05.395 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.395 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.395 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:05.395 18:48:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.395 18:48:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:05.652 [2024-07-14 18:48:53.828725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.652 18:48:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:05.652 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:05.652 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.652 18:48:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:05.911 ************************************ 00:17:05.911 START TEST lvs_grow_clean 00:17:05.911 ************************************ 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:05.911 18:48:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:06.170 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:06.170 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:06.429 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:06.429 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:06.429 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:06.687 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:06.687 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:06.687 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 lvol 150 00:17:06.945 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=765e17b7-cbe6-4c46-9cc7-05e4a6c356bf 00:17:06.945 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:06.945 18:48:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:07.203 [2024-07-14 18:48:55.188098] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:07.203 [2024-07-14 18:48:55.188177] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:07.203 true 00:17:07.203 18:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:07.203 18:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:07.462 18:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:07.462 18:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:17:07.720 18:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 765e17b7-cbe6-4c46-9cc7-05e4a6c356bf 00:17:07.978 18:48:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:08.235 [2024-07-14 18:48:56.223271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.235 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3577243 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3577243 /var/tmp/bdevperf.sock 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3577243 ']' 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.493 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:08.493 [2024-07-14 18:48:56.574756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:08.493 [2024-07-14 18:48:56.574834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577243 ] 00:17:08.493 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.493 [2024-07-14 18:48:56.642284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.751 [2024-07-14 18:48:56.735147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.751 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.751 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:08.751 18:48:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:09.009 Nvme0n1 00:17:09.009 18:48:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:09.267 [ 00:17:09.267 { 00:17:09.267 "name": "Nvme0n1", 00:17:09.267 "aliases": [ 00:17:09.268 "765e17b7-cbe6-4c46-9cc7-05e4a6c356bf" 
00:17:09.268 ], 00:17:09.268 "product_name": "NVMe disk", 00:17:09.268 "block_size": 4096, 00:17:09.268 "num_blocks": 38912, 00:17:09.268 "uuid": "765e17b7-cbe6-4c46-9cc7-05e4a6c356bf", 00:17:09.268 "assigned_rate_limits": { 00:17:09.268 "rw_ios_per_sec": 0, 00:17:09.268 "rw_mbytes_per_sec": 0, 00:17:09.268 "r_mbytes_per_sec": 0, 00:17:09.268 "w_mbytes_per_sec": 0 00:17:09.268 }, 00:17:09.268 "claimed": false, 00:17:09.268 "zoned": false, 00:17:09.268 "supported_io_types": { 00:17:09.268 "read": true, 00:17:09.268 "write": true, 00:17:09.268 "unmap": true, 00:17:09.268 "flush": true, 00:17:09.268 "reset": true, 00:17:09.268 "nvme_admin": true, 00:17:09.268 "nvme_io": true, 00:17:09.268 "nvme_io_md": false, 00:17:09.268 "write_zeroes": true, 00:17:09.268 "zcopy": false, 00:17:09.268 "get_zone_info": false, 00:17:09.268 "zone_management": false, 00:17:09.268 "zone_append": false, 00:17:09.268 "compare": true, 00:17:09.268 "compare_and_write": true, 00:17:09.268 "abort": true, 00:17:09.268 "seek_hole": false, 00:17:09.268 "seek_data": false, 00:17:09.268 "copy": true, 00:17:09.268 "nvme_iov_md": false 00:17:09.268 }, 00:17:09.268 "memory_domains": [ 00:17:09.268 { 00:17:09.268 "dma_device_id": "system", 00:17:09.268 "dma_device_type": 1 00:17:09.268 } 00:17:09.268 ], 00:17:09.268 "driver_specific": { 00:17:09.268 "nvme": [ 00:17:09.268 { 00:17:09.268 "trid": { 00:17:09.268 "trtype": "TCP", 00:17:09.268 "adrfam": "IPv4", 00:17:09.268 "traddr": "10.0.0.2", 00:17:09.268 "trsvcid": "4420", 00:17:09.268 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:09.268 }, 00:17:09.268 "ctrlr_data": { 00:17:09.268 "cntlid": 1, 00:17:09.268 "vendor_id": "0x8086", 00:17:09.268 "model_number": "SPDK bdev Controller", 00:17:09.268 "serial_number": "SPDK0", 00:17:09.268 "firmware_revision": "24.09", 00:17:09.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:09.268 "oacs": { 00:17:09.268 "security": 0, 00:17:09.268 "format": 0, 00:17:09.268 "firmware": 0, 00:17:09.268 "ns_manage": 0 
00:17:09.268 }, 00:17:09.268 "multi_ctrlr": true, 00:17:09.268 "ana_reporting": false 00:17:09.268 }, 00:17:09.268 "vs": { 00:17:09.268 "nvme_version": "1.3" 00:17:09.268 }, 00:17:09.268 "ns_data": { 00:17:09.268 "id": 1, 00:17:09.268 "can_share": true 00:17:09.268 } 00:17:09.268 } 00:17:09.268 ], 00:17:09.268 "mp_policy": "active_passive" 00:17:09.268 } 00:17:09.268 } 00:17:09.268 ] 00:17:09.268 18:48:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3577356 00:17:09.268 18:48:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:09.268 18:48:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:09.526 Running I/O for 10 seconds... 00:17:10.460 Latency(us) 00:17:10.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.460 Nvme0n1 : 1.00 13971.00 54.57 0.00 0.00 0.00 0.00 0.00 00:17:10.460 =================================================================================================================== 00:17:10.460 Total : 13971.00 54.57 0.00 0.00 0.00 0.00 0.00 00:17:10.460 00:17:11.395 18:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:11.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.653 Nvme0n1 : 2.00 14161.00 55.32 0.00 0.00 0.00 0.00 0.00 00:17:11.653 =================================================================================================================== 00:17:11.653 Total : 14161.00 55.32 0.00 0.00 0.00 0.00 0.00 00:17:11.653 00:17:11.653 true 00:17:11.653 18:48:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:11.653 18:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:11.911 18:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:11.911 18:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:11.911 18:48:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3577356 00:17:12.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.478 Nvme0n1 : 3.00 14267.67 55.73 0.00 0.00 0.00 0.00 0.00 00:17:12.478 =================================================================================================================== 00:17:12.478 Total : 14267.67 55.73 0.00 0.00 0.00 0.00 0.00 00:17:12.478 00:17:13.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.853 Nvme0n1 : 4.00 14368.25 56.13 0.00 0.00 0.00 0.00 0.00 00:17:13.853 =================================================================================================================== 00:17:13.853 Total : 14368.25 56.13 0.00 0.00 0.00 0.00 0.00 00:17:13.853 00:17:14.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.791 Nvme0n1 : 5.00 14441.00 56.41 0.00 0.00 0.00 0.00 0.00 00:17:14.791 =================================================================================================================== 00:17:14.791 Total : 14441.00 56.41 0.00 0.00 0.00 0.00 0.00 00:17:14.791 00:17:15.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.727 Nvme0n1 : 6.00 14481.17 56.57 0.00 0.00 0.00 0.00 0.00 00:17:15.727 
=================================================================================================================== 00:17:15.727 Total : 14481.17 56.57 0.00 0.00 0.00 0.00 0.00 00:17:15.727 00:17:16.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.660 Nvme0n1 : 7.00 14517.00 56.71 0.00 0.00 0.00 0.00 0.00 00:17:16.660 =================================================================================================================== 00:17:16.660 Total : 14517.00 56.71 0.00 0.00 0.00 0.00 0.00 00:17:16.660 00:17:17.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.621 Nvme0n1 : 8.00 14543.88 56.81 0.00 0.00 0.00 0.00 0.00 00:17:17.621 =================================================================================================================== 00:17:17.621 Total : 14543.88 56.81 0.00 0.00 0.00 0.00 0.00 00:17:17.621 00:17:18.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.553 Nvme0n1 : 9.00 14578.89 56.95 0.00 0.00 0.00 0.00 0.00 00:17:18.553 =================================================================================================================== 00:17:18.553 Total : 14578.89 56.95 0.00 0.00 0.00 0.00 0.00 00:17:18.553 00:17:19.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.486 Nvme0n1 : 10.00 14606.90 57.06 0.00 0.00 0.00 0.00 0.00 00:17:19.486 =================================================================================================================== 00:17:19.486 Total : 14606.90 57.06 0.00 0.00 0.00 0.00 0.00 00:17:19.486 00:17:19.486 00:17:19.486 Latency(us) 00:17:19.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.486 Nvme0n1 : 10.01 14606.78 57.06 0.00 0.00 8758.02 5364.24 18641.35 00:17:19.486 
=================================================================================================================== 00:17:19.486 Total : 14606.78 57.06 0.00 0.00 8758.02 5364.24 18641.35 00:17:19.486 0 00:17:19.486 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3577243 00:17:19.486 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3577243 ']' 00:17:19.486 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3577243 00:17:19.486 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:19.486 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.486 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3577243 00:17:19.744 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:19.744 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:19.744 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3577243' 00:17:19.744 killing process with pid 3577243 00:17:19.744 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3577243 00:17:19.744 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.744 00:17:19.744 Latency(us) 00:17:19.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.744 =================================================================================================================== 00:17:19.744 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.744 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3577243 00:17:19.744 18:49:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:20.309 18:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:20.566 18:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:20.566 18:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:20.824 18:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:20.824 18:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:20.824 18:49:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:20.824 [2024-07-14 18:49:09.027707] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:21.082 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:21.082 request: 00:17:21.082 { 00:17:21.082 "uuid": "8eaf1057-053c-46d0-b114-7aa441a7c9b8", 00:17:21.082 "method": "bdev_lvol_get_lvstores", 00:17:21.082 "req_id": 1 00:17:21.082 } 00:17:21.082 Got JSON-RPC error response 00:17:21.082 response: 00:17:21.082 { 00:17:21.082 "code": -19, 00:17:21.082 "message": "No such device" 00:17:21.082 } 00:17:21.340 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:21.340 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:21.340 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:21.340 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:21.340 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:21.340 aio_bdev 00:17:21.599 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 765e17b7-cbe6-4c46-9cc7-05e4a6c356bf 00:17:21.599 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=765e17b7-cbe6-4c46-9cc7-05e4a6c356bf 00:17:21.599 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:21.599 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:21.599 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:21.599 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:21.599 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:21.856 18:49:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 765e17b7-cbe6-4c46-9cc7-05e4a6c356bf -t 2000 00:17:22.114 [ 00:17:22.114 { 00:17:22.114 "name": "765e17b7-cbe6-4c46-9cc7-05e4a6c356bf", 00:17:22.114 "aliases": [ 00:17:22.114 "lvs/lvol" 00:17:22.114 ], 00:17:22.114 "product_name": "Logical Volume", 00:17:22.114 "block_size": 4096, 00:17:22.114 "num_blocks": 38912, 00:17:22.114 "uuid": "765e17b7-cbe6-4c46-9cc7-05e4a6c356bf", 00:17:22.114 "assigned_rate_limits": { 00:17:22.114 
"rw_ios_per_sec": 0, 00:17:22.114 "rw_mbytes_per_sec": 0, 00:17:22.114 "r_mbytes_per_sec": 0, 00:17:22.114 "w_mbytes_per_sec": 0 00:17:22.114 }, 00:17:22.114 "claimed": false, 00:17:22.114 "zoned": false, 00:17:22.114 "supported_io_types": { 00:17:22.114 "read": true, 00:17:22.114 "write": true, 00:17:22.114 "unmap": true, 00:17:22.114 "flush": false, 00:17:22.114 "reset": true, 00:17:22.114 "nvme_admin": false, 00:17:22.114 "nvme_io": false, 00:17:22.114 "nvme_io_md": false, 00:17:22.114 "write_zeroes": true, 00:17:22.114 "zcopy": false, 00:17:22.114 "get_zone_info": false, 00:17:22.114 "zone_management": false, 00:17:22.114 "zone_append": false, 00:17:22.114 "compare": false, 00:17:22.114 "compare_and_write": false, 00:17:22.114 "abort": false, 00:17:22.114 "seek_hole": true, 00:17:22.114 "seek_data": true, 00:17:22.114 "copy": false, 00:17:22.114 "nvme_iov_md": false 00:17:22.114 }, 00:17:22.114 "driver_specific": { 00:17:22.114 "lvol": { 00:17:22.114 "lvol_store_uuid": "8eaf1057-053c-46d0-b114-7aa441a7c9b8", 00:17:22.114 "base_bdev": "aio_bdev", 00:17:22.114 "thin_provision": false, 00:17:22.114 "num_allocated_clusters": 38, 00:17:22.114 "snapshot": false, 00:17:22.114 "clone": false, 00:17:22.114 "esnap_clone": false 00:17:22.114 } 00:17:22.114 } 00:17:22.114 } 00:17:22.114 ] 00:17:22.114 18:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:22.114 18:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:22.114 18:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:22.372 18:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:22.372 18:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:22.372 18:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:22.630 18:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:22.630 18:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 765e17b7-cbe6-4c46-9cc7-05e4a6c356bf 00:17:22.889 18:49:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8eaf1057-053c-46d0-b114-7aa441a7c9b8 00:17:23.147 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:23.404 00:17:23.404 real 0m17.617s 00:17:23.404 user 0m17.096s 00:17:23.404 sys 0m1.888s 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:23.404 ************************************ 00:17:23.404 END TEST lvs_grow_clean 00:17:23.404 ************************************ 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:23.404 ************************************ 00:17:23.404 START TEST lvs_grow_dirty 00:17:23.404 ************************************ 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:23.404 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:23.662 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:23.662 18:49:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:23.919 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:23.919 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:23.919 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:24.178 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:24.178 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:24.178 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 lvol 150 00:17:24.437 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f5da52fb-8edc-45e2-945d-e2ec77bf3157 00:17:24.437 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:24.437 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:24.694 [2024-07-14 18:49:12.838089] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:24.694 [2024-07-14 18:49:12.838188] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:24.694 
true 00:17:24.694 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:24.694 18:49:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:24.951 18:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:24.951 18:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:25.208 18:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f5da52fb-8edc-45e2-945d-e2ec77bf3157 00:17:25.465 18:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:25.723 [2024-07-14 18:49:13.833125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.723 18:49:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3579397 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- 
# trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3579397 /var/tmp/bdevperf.sock 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3579397 ']' 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.981 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:25.981 [2024-07-14 18:49:14.132995] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
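The geometry the surrounding test checks can be verified by hand: the log above truncates the AIO file to 200M, creates the lvstore with `--cluster-sz 4194304` (4 MiB) and reports 49 data clusters, then grows the file to 400M and expects 99. A minimal sketch of that arithmetic follows; the figures come from the log, while the one-cluster metadata overhead is an assumption inferred from the reported counts, not from SPDK documentation.

```python
# Hedged sketch of the cluster arithmetic behind the lvs_grow checks.
# Sizes are taken from the log (200 MiB AIO file grown to 400 MiB,
# 4 MiB clusters, 150 MiB lvol); METADATA_CLUSTERS is an assumption.
CLUSTER_MB = 4
METADATA_CLUSTERS = 1  # assumed lvstore metadata overhead

def data_clusters(aio_mb: int) -> int:
    """Clusters available for data in an lvstore backed by an aio_mb-MiB bdev."""
    return aio_mb // CLUSTER_MB - METADATA_CLUSTERS

# 150 MiB lvol rounded up to whole clusters
lvol_clusters = -(-150 // CLUSTER_MB)

print(data_clusters(200),                      # 49 before the grow
      data_clusters(400),                      # 99 after bdev_lvol_grow_lvstore
      lvol_clusters,                           # 38 allocated by the lvol
      data_clusters(400) - lvol_clusters)      # 61 free clusters
```

This matches every count asserted in the log: `data_clusters == 49`, then `== 99` after the grow, `num_allocated_clusters: 38`, and `free_clusters == 61`.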
00:17:25.981 [2024-07-14 18:49:14.133084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579397 ] 00:17:25.981 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.981 [2024-07-14 18:49:14.195456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.238 [2024-07-14 18:49:14.286437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.238 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.238 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:26.238 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:26.806 Nvme0n1 00:17:26.806 18:49:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:27.065 [ 00:17:27.065 { 00:17:27.065 "name": "Nvme0n1", 00:17:27.065 "aliases": [ 00:17:27.065 "f5da52fb-8edc-45e2-945d-e2ec77bf3157" 00:17:27.065 ], 00:17:27.065 "product_name": "NVMe disk", 00:17:27.065 "block_size": 4096, 00:17:27.065 "num_blocks": 38912, 00:17:27.065 "uuid": "f5da52fb-8edc-45e2-945d-e2ec77bf3157", 00:17:27.065 "assigned_rate_limits": { 00:17:27.065 "rw_ios_per_sec": 0, 00:17:27.065 "rw_mbytes_per_sec": 0, 00:17:27.065 "r_mbytes_per_sec": 0, 00:17:27.065 "w_mbytes_per_sec": 0 00:17:27.065 }, 00:17:27.065 "claimed": false, 00:17:27.065 "zoned": false, 00:17:27.065 "supported_io_types": { 00:17:27.065 "read": true, 00:17:27.065 "write": true, 
00:17:27.065 "unmap": true, 00:17:27.065 "flush": true, 00:17:27.065 "reset": true, 00:17:27.065 "nvme_admin": true, 00:17:27.065 "nvme_io": true, 00:17:27.065 "nvme_io_md": false, 00:17:27.065 "write_zeroes": true, 00:17:27.065 "zcopy": false, 00:17:27.065 "get_zone_info": false, 00:17:27.065 "zone_management": false, 00:17:27.065 "zone_append": false, 00:17:27.065 "compare": true, 00:17:27.065 "compare_and_write": true, 00:17:27.065 "abort": true, 00:17:27.065 "seek_hole": false, 00:17:27.065 "seek_data": false, 00:17:27.065 "copy": true, 00:17:27.065 "nvme_iov_md": false 00:17:27.065 }, 00:17:27.065 "memory_domains": [ 00:17:27.065 { 00:17:27.065 "dma_device_id": "system", 00:17:27.065 "dma_device_type": 1 00:17:27.065 } 00:17:27.065 ], 00:17:27.065 "driver_specific": { 00:17:27.065 "nvme": [ 00:17:27.065 { 00:17:27.065 "trid": { 00:17:27.065 "trtype": "TCP", 00:17:27.065 "adrfam": "IPv4", 00:17:27.065 "traddr": "10.0.0.2", 00:17:27.065 "trsvcid": "4420", 00:17:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:27.065 }, 00:17:27.065 "ctrlr_data": { 00:17:27.065 "cntlid": 1, 00:17:27.065 "vendor_id": "0x8086", 00:17:27.065 "model_number": "SPDK bdev Controller", 00:17:27.065 "serial_number": "SPDK0", 00:17:27.065 "firmware_revision": "24.09", 00:17:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:27.065 "oacs": { 00:17:27.065 "security": 0, 00:17:27.065 "format": 0, 00:17:27.065 "firmware": 0, 00:17:27.065 "ns_manage": 0 00:17:27.065 }, 00:17:27.065 "multi_ctrlr": true, 00:17:27.065 "ana_reporting": false 00:17:27.065 }, 00:17:27.065 "vs": { 00:17:27.065 "nvme_version": "1.3" 00:17:27.065 }, 00:17:27.065 "ns_data": { 00:17:27.065 "id": 1, 00:17:27.065 "can_share": true 00:17:27.065 } 00:17:27.065 } 00:17:27.065 ], 00:17:27.065 "mp_policy": "active_passive" 00:17:27.065 } 00:17:27.065 } 00:17:27.065 ] 00:17:27.065 18:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3579534 00:17:27.065 18:49:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:27.065 18:49:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:27.065 Running I/O for 10 seconds... 00:17:28.442 Latency(us) 00:17:28.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.442 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:17:28.442 =================================================================================================================== 00:17:28.442 Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:17:28.442 00:17:29.009 18:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:29.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.266 Nvme0n1 : 2.00 14732.50 57.55 0.00 0.00 0.00 0.00 0.00 00:17:29.266 =================================================================================================================== 00:17:29.266 Total : 14732.50 57.55 0.00 0.00 0.00 0.00 0.00 00:17:29.266 00:17:29.266 true 00:17:29.266 18:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:29.266 18:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:29.524 18:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:29.524 18:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
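The check above extracts `total_data_clusters` by piping `bdev_lvol_get_lvstores` through `jq -r '.[0].total_data_clusters'`. A Python equivalent of that extraction is sketched below; the sample JSON is a trimmed stand-in modeled on the values in this log (the exact field set of the real RPC response may differ), with only the fields the test actually reads guaranteed by the log itself.

```python
import json

# Trimmed sample of bdev_lvol_get_lvstores output, modeled on this log;
# uuid, base_bdev, total_data_clusters and free_clusters appear above,
# the remaining fields are illustrative assumptions.
sample = '''[
  {
    "uuid": "7c10929e-3b1d-487f-a3d9-351031dcdd39",
    "name": "lvs",
    "base_bdev": "aio_bdev",
    "total_data_clusters": 99,
    "free_clusters": 61,
    "cluster_size": 4194304
  }
]'''

stores = json.loads(sample)
# Equivalent of: jq -r '.[0].total_data_clusters'
print(stores[0]["total_data_clusters"])  # 99
```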
00:17:29.524 18:49:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3579534 00:17:30.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.089 Nvme0n1 : 3.00 14647.67 57.22 0.00 0.00 0.00 0.00 0.00 00:17:30.089 =================================================================================================================== 00:17:30.089 Total : 14647.67 57.22 0.00 0.00 0.00 0.00 0.00 00:17:30.089 00:17:31.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.466 Nvme0n1 : 4.00 14637.00 57.18 0.00 0.00 0.00 0.00 0.00 00:17:31.466 =================================================================================================================== 00:17:31.466 Total : 14637.00 57.18 0.00 0.00 0.00 0.00 0.00 00:17:31.466 00:17:32.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.405 Nvme0n1 : 5.00 14657.20 57.25 0.00 0.00 0.00 0.00 0.00 00:17:32.405 =================================================================================================================== 00:17:32.405 Total : 14657.20 57.25 0.00 0.00 0.00 0.00 0.00 00:17:32.405 00:17:33.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.344 Nvme0n1 : 6.00 14660.83 57.27 0.00 0.00 0.00 0.00 0.00 00:17:33.344 =================================================================================================================== 00:17:33.344 Total : 14660.83 57.27 0.00 0.00 0.00 0.00 0.00 00:17:33.344 00:17:34.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.341 Nvme0n1 : 7.00 14671.00 57.31 0.00 0.00 0.00 0.00 0.00 00:17:34.341 =================================================================================================================== 00:17:34.341 Total : 14671.00 57.31 0.00 0.00 0.00 0.00 0.00 00:17:34.341 00:17:35.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:17:35.278 Nvme0n1 : 8.00 14679.38 57.34 0.00 0.00 0.00 0.00 0.00 00:17:35.278 =================================================================================================================== 00:17:35.278 Total : 14679.38 57.34 0.00 0.00 0.00 0.00 0.00 00:17:35.278 00:17:36.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.216 Nvme0n1 : 9.00 14706.56 57.45 0.00 0.00 0.00 0.00 0.00 00:17:36.216 =================================================================================================================== 00:17:36.216 Total : 14706.56 57.45 0.00 0.00 0.00 0.00 0.00 00:17:36.216 00:17:37.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.152 Nvme0n1 : 10.00 14709.10 57.46 0.00 0.00 0.00 0.00 0.00 00:17:37.152 =================================================================================================================== 00:17:37.152 Total : 14709.10 57.46 0.00 0.00 0.00 0.00 0.00 00:17:37.152 00:17:37.152 00:17:37.152 Latency(us) 00:17:37.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.152 Nvme0n1 : 10.00 14715.54 57.48 0.00 0.00 8693.08 5364.24 17087.91 00:17:37.152 =================================================================================================================== 00:17:37.152 Total : 14715.54 57.48 0.00 0.00 8693.08 5364.24 17087.91 00:17:37.152 0 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3579397 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3579397 ']' 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3579397 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:37.152 18:49:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3579397 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3579397' 00:17:37.152 killing process with pid 3579397 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3579397 00:17:37.152 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.152 00:17:37.152 Latency(us) 00:17:37.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.152 =================================================================================================================== 00:17:37.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.152 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3579397 00:17:37.410 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:37.667 18:49:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:38.233 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:38.233 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:38.233 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:38.233 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:38.233 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3576905 00:17:38.233 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3576905 00:17:38.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3576905 Killed "${NVMF_APP[@]}" "$@" 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3580861 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3580861 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3580861 ']' 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 
00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.493 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:38.493 [2024-07-14 18:49:26.523386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:38.493 [2024-07-14 18:49:26.523480] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.493 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.493 [2024-07-14 18:49:26.591908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.493 [2024-07-14 18:49:26.678638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.493 [2024-07-14 18:49:26.678700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.493 [2024-07-14 18:49:26.678728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.493 [2024-07-14 18:49:26.678739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.493 [2024-07-14 18:49:26.678749] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:38.493 [2024-07-14 18:49:26.678778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.752 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.752 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:38.752 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.752 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:38.752 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:38.752 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.752 18:49:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:39.010 [2024-07-14 18:49:27.082858] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:39.010 [2024-07-14 18:49:27.083013] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:39.010 [2024-07-14 18:49:27.083070] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:39.010 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:39.010 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f5da52fb-8edc-45e2-945d-e2ec77bf3157 00:17:39.010 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f5da52fb-8edc-45e2-945d-e2ec77bf3157 00:17:39.010 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:39.010 18:49:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:39.010 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:39.010 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:39.010 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:39.270 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f5da52fb-8edc-45e2-945d-e2ec77bf3157 -t 2000 00:17:39.529 [ 00:17:39.529 { 00:17:39.529 "name": "f5da52fb-8edc-45e2-945d-e2ec77bf3157", 00:17:39.529 "aliases": [ 00:17:39.529 "lvs/lvol" 00:17:39.529 ], 00:17:39.529 "product_name": "Logical Volume", 00:17:39.529 "block_size": 4096, 00:17:39.529 "num_blocks": 38912, 00:17:39.529 "uuid": "f5da52fb-8edc-45e2-945d-e2ec77bf3157", 00:17:39.529 "assigned_rate_limits": { 00:17:39.529 "rw_ios_per_sec": 0, 00:17:39.529 "rw_mbytes_per_sec": 0, 00:17:39.529 "r_mbytes_per_sec": 0, 00:17:39.529 "w_mbytes_per_sec": 0 00:17:39.529 }, 00:17:39.529 "claimed": false, 00:17:39.529 "zoned": false, 00:17:39.529 "supported_io_types": { 00:17:39.529 "read": true, 00:17:39.529 "write": true, 00:17:39.529 "unmap": true, 00:17:39.529 "flush": false, 00:17:39.529 "reset": true, 00:17:39.529 "nvme_admin": false, 00:17:39.529 "nvme_io": false, 00:17:39.529 "nvme_io_md": false, 00:17:39.529 "write_zeroes": true, 00:17:39.529 "zcopy": false, 00:17:39.529 "get_zone_info": false, 00:17:39.529 "zone_management": false, 00:17:39.529 "zone_append": false, 00:17:39.529 "compare": false, 00:17:39.529 "compare_and_write": false, 00:17:39.529 "abort": false, 00:17:39.529 "seek_hole": true, 00:17:39.529 "seek_data": true, 00:17:39.529 "copy": false, 00:17:39.529 "nvme_iov_md": false 
00:17:39.529 }, 00:17:39.529 "driver_specific": { 00:17:39.529 "lvol": { 00:17:39.529 "lvol_store_uuid": "7c10929e-3b1d-487f-a3d9-351031dcdd39", 00:17:39.529 "base_bdev": "aio_bdev", 00:17:39.529 "thin_provision": false, 00:17:39.529 "num_allocated_clusters": 38, 00:17:39.529 "snapshot": false, 00:17:39.529 "clone": false, 00:17:39.529 "esnap_clone": false 00:17:39.529 } 00:17:39.529 } 00:17:39.529 } 00:17:39.529 ] 00:17:39.529 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:39.529 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:39.529 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:39.786 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:39.786 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:39.786 18:49:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:40.044 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:40.044 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:40.303 [2024-07-14 18:49:28.327788] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:40.303 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:40.561 request: 00:17:40.561 { 00:17:40.561 "uuid": "7c10929e-3b1d-487f-a3d9-351031dcdd39", 00:17:40.561 "method": "bdev_lvol_get_lvstores", 
00:17:40.561 "req_id": 1 00:17:40.561 } 00:17:40.561 Got JSON-RPC error response 00:17:40.561 response: 00:17:40.561 { 00:17:40.561 "code": -19, 00:17:40.561 "message": "No such device" 00:17:40.561 } 00:17:40.561 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:40.561 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.561 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.562 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.562 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:40.819 aio_bdev 00:17:40.819 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f5da52fb-8edc-45e2-945d-e2ec77bf3157 00:17:40.819 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f5da52fb-8edc-45e2-945d-e2ec77bf3157 00:17:40.819 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:40.819 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:40.819 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:40.819 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:40.819 18:49:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:41.077 18:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f5da52fb-8edc-45e2-945d-e2ec77bf3157 -t 2000 00:17:41.335 [ 00:17:41.335 { 00:17:41.335 "name": "f5da52fb-8edc-45e2-945d-e2ec77bf3157", 00:17:41.335 "aliases": [ 00:17:41.335 "lvs/lvol" 00:17:41.335 ], 00:17:41.335 "product_name": "Logical Volume", 00:17:41.335 "block_size": 4096, 00:17:41.335 "num_blocks": 38912, 00:17:41.335 "uuid": "f5da52fb-8edc-45e2-945d-e2ec77bf3157", 00:17:41.335 "assigned_rate_limits": { 00:17:41.335 "rw_ios_per_sec": 0, 00:17:41.335 "rw_mbytes_per_sec": 0, 00:17:41.335 "r_mbytes_per_sec": 0, 00:17:41.335 "w_mbytes_per_sec": 0 00:17:41.335 }, 00:17:41.335 "claimed": false, 00:17:41.335 "zoned": false, 00:17:41.335 "supported_io_types": { 00:17:41.335 "read": true, 00:17:41.335 "write": true, 00:17:41.335 "unmap": true, 00:17:41.335 "flush": false, 00:17:41.335 "reset": true, 00:17:41.335 "nvme_admin": false, 00:17:41.335 "nvme_io": false, 00:17:41.335 "nvme_io_md": false, 00:17:41.335 "write_zeroes": true, 00:17:41.335 "zcopy": false, 00:17:41.335 "get_zone_info": false, 00:17:41.335 "zone_management": false, 00:17:41.335 "zone_append": false, 00:17:41.335 "compare": false, 00:17:41.335 "compare_and_write": false, 00:17:41.335 "abort": false, 00:17:41.335 "seek_hole": true, 00:17:41.335 "seek_data": true, 00:17:41.335 "copy": false, 00:17:41.335 "nvme_iov_md": false 00:17:41.335 }, 00:17:41.335 "driver_specific": { 00:17:41.335 "lvol": { 00:17:41.335 "lvol_store_uuid": "7c10929e-3b1d-487f-a3d9-351031dcdd39", 00:17:41.335 "base_bdev": "aio_bdev", 00:17:41.335 "thin_provision": false, 00:17:41.335 "num_allocated_clusters": 38, 00:17:41.335 "snapshot": false, 00:17:41.335 "clone": false, 00:17:41.335 "esnap_clone": false 00:17:41.335 } 00:17:41.335 } 00:17:41.335 } 00:17:41.335 ] 00:17:41.335 18:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:41.335 18:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:41.335 18:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:41.593 18:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:41.593 18:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:41.593 18:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:41.852 18:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:41.852 18:49:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f5da52fb-8edc-45e2-945d-e2ec77bf3157 00:17:42.111 18:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7c10929e-3b1d-487f-a3d9-351031dcdd39 00:17:42.370 18:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:42.629 18:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:42.629 00:17:42.629 real 0m19.248s 00:17:42.629 user 0m48.666s 00:17:42.630 sys 0m4.758s 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
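The checks traced above read the lvstore's cluster counts by piping `rpc.py bdev_lvol_get_lvstores -u <uuid>` output into `jq -r '.[0].free_clusters'` and `.[0].total_data_clusters`, then asserting the expected values with bash arithmetic. A minimal stand-alone sketch of that verification step, using a hard-coded JSON sample in place of the live rpc.py output (the UUID and the expected counts 61/99 are taken from this log; python3 stands in for jq so the sketch has no extra dependency):

```shell
#!/usr/bin/env bash
# Stand-in for `rpc.py bdev_lvol_get_lvstores -u <uuid>`: a one-element
# array of lvstore objects, as seen in the traced test run.
lvstores='[{"uuid":"7c10929e-3b1d-487f-a3d9-351031dcdd39","free_clusters":61,"total_data_clusters":99}]'

# Extract the two fields the test asserts on (the real script uses jq -r).
free_clusters=$(echo "$lvstores" | python3 -c 'import json,sys; print(json.load(sys.stdin)[0]["free_clusters"])')
data_clusters=$(echo "$lvstores" | python3 -c 'import json,sys; print(json.load(sys.stdin)[0]["total_data_clusters"])')

# Same arithmetic assertions as nvmf_lvs_grow.sh@88 and @89.
(( free_clusters == 61 ))
(( data_clusters == 99 ))
echo "cluster counts match"
```

This mirrors the pattern in the trace: query, extract one field, compare with `(( ... ))` so a mismatch fails the script under `set -e`.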
00:17:42.630 ************************************ 00:17:42.630 END TEST lvs_grow_dirty 00:17:42.630 ************************************ 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:42.630 nvmf_trace.0 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.630 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.887 rmmod 
nvme_tcp 00:17:42.887 rmmod nvme_fabrics 00:17:42.887 rmmod nvme_keyring 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3580861 ']' 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3580861 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3580861 ']' 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3580861 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3580861 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3580861' 00:17:42.887 killing process with pid 3580861 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3580861 00:17:42.887 18:49:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3580861 00:17:43.144 18:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.144 18:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.144 18:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.144 18:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:17:43.144 18:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.144 18:49:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.144 18:49:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.144 18:49:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.046 18:49:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:45.046 00:17:45.046 real 0m42.223s 00:17:45.046 user 1m11.581s 00:17:45.046 sys 0m8.549s 00:17:45.046 18:49:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:45.046 18:49:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:45.046 ************************************ 00:17:45.046 END TEST nvmf_lvs_grow 00:17:45.046 ************************************ 00:17:45.046 18:49:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:45.046 18:49:33 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:45.046 18:49:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:45.046 18:49:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.046 18:49:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.046 ************************************ 00:17:45.046 START TEST nvmf_bdev_io_wait 00:17:45.046 ************************************ 00:17:45.046 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:45.305 * Looking for test storage... 
00:17:45.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:45.305 18:49:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.207 18:49:35 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:47.207 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:47.207 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:47.207 18:49:35 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:47.207 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:47.207 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:47.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:47.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:17:47.207 00:17:47.207 --- 10.0.0.2 ping statistics --- 00:17:47.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.207 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:17:47.207 00:17:47.207 --- 10.0.0.1 ping statistics --- 00:17:47.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.207 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.207 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3583378 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3583378 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3583378 ']' 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.466 [2024-07-14 18:49:35.488006] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:47.466 [2024-07-14 18:49:35.488089] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.466 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.466 [2024-07-14 18:49:35.558483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.466 [2024-07-14 18:49:35.653607] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:47.466 [2024-07-14 18:49:35.653669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.466 [2024-07-14 18:49:35.653686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.466 [2024-07-14 18:49:35.653699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.466 [2024-07-14 18:49:35.653710] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.466 [2024-07-14 18:49:35.653794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.466 [2024-07-14 18:49:35.653848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.466 [2024-07-14 18:49:35.653903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.466 [2024-07-14 18:49:35.653907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.466 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.724 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.724 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:47.724 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.724 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.724 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.724 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:47.724 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.724 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.724 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.725 [2024-07-14 18:49:35.785874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.725 Malloc0 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:47.725 [2024-07-14 18:49:35.844286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3583412 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3583414 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.725 { 00:17:47.725 "params": { 00:17:47.725 "name": "Nvme$subsystem", 00:17:47.725 "trtype": "$TEST_TRANSPORT", 
00:17:47.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.725 "adrfam": "ipv4", 00:17:47.725 "trsvcid": "$NVMF_PORT", 00:17:47.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.725 "hdgst": ${hdgst:-false}, 00:17:47.725 "ddgst": ${ddgst:-false} 00:17:47.725 }, 00:17:47.725 "method": "bdev_nvme_attach_controller" 00:17:47.725 } 00:17:47.725 EOF 00:17:47.725 )") 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3583416 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.725 { 00:17:47.725 "params": { 00:17:47.725 "name": "Nvme$subsystem", 00:17:47.725 "trtype": "$TEST_TRANSPORT", 00:17:47.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.725 "adrfam": "ipv4", 00:17:47.725 "trsvcid": "$NVMF_PORT", 00:17:47.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.725 "hdgst": ${hdgst:-false}, 00:17:47.725 "ddgst": ${ddgst:-false} 00:17:47.725 }, 00:17:47.725 "method": "bdev_nvme_attach_controller" 00:17:47.725 } 00:17:47.725 EOF 00:17:47.725 )") 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 
00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3583419 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.725 { 00:17:47.725 "params": { 00:17:47.725 "name": "Nvme$subsystem", 00:17:47.725 "trtype": "$TEST_TRANSPORT", 00:17:47.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.725 "adrfam": "ipv4", 00:17:47.725 "trsvcid": "$NVMF_PORT", 00:17:47.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.725 "hdgst": ${hdgst:-false}, 00:17:47.725 "ddgst": ${ddgst:-false} 00:17:47.725 }, 00:17:47.725 "method": "bdev_nvme_attach_controller" 00:17:47.725 } 00:17:47.725 EOF 00:17:47.725 )") 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:47.725 { 00:17:47.725 "params": { 00:17:47.725 "name": "Nvme$subsystem", 00:17:47.725 "trtype": "$TEST_TRANSPORT", 00:17:47.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:47.725 "adrfam": "ipv4", 00:17:47.725 "trsvcid": "$NVMF_PORT", 00:17:47.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:47.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:47.725 "hdgst": ${hdgst:-false}, 00:17:47.725 "ddgst": ${ddgst:-false} 00:17:47.725 }, 00:17:47.725 "method": "bdev_nvme_attach_controller" 00:17:47.725 } 00:17:47.725 EOF 00:17:47.725 )") 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3583412 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.725 "params": { 00:17:47.725 "name": "Nvme1", 00:17:47.725 "trtype": "tcp", 00:17:47.725 "traddr": "10.0.0.2", 00:17:47.725 "adrfam": "ipv4", 00:17:47.725 "trsvcid": "4420", 00:17:47.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.725 "hdgst": false, 00:17:47.725 "ddgst": false 00:17:47.725 }, 00:17:47.725 "method": "bdev_nvme_attach_controller" 00:17:47.725 }' 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.725 "params": { 00:17:47.725 "name": "Nvme1", 00:17:47.725 "trtype": "tcp", 00:17:47.725 "traddr": "10.0.0.2", 00:17:47.725 "adrfam": "ipv4", 00:17:47.725 "trsvcid": "4420", 00:17:47.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.725 "hdgst": false, 00:17:47.725 "ddgst": false 00:17:47.725 }, 00:17:47.725 "method": "bdev_nvme_attach_controller" 00:17:47.725 }' 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.725 "params": { 00:17:47.725 "name": "Nvme1", 00:17:47.725 "trtype": "tcp", 00:17:47.725 "traddr": "10.0.0.2", 00:17:47.725 "adrfam": "ipv4", 00:17:47.725 "trsvcid": "4420", 00:17:47.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.725 "hdgst": false, 00:17:47.725 "ddgst": false 00:17:47.725 }, 00:17:47.725 "method": "bdev_nvme_attach_controller" 00:17:47.725 }' 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:47.725 18:49:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:47.725 "params": { 00:17:47.725 "name": "Nvme1", 00:17:47.725 "trtype": "tcp", 00:17:47.725 "traddr": "10.0.0.2", 00:17:47.725 "adrfam": "ipv4", 00:17:47.725 "trsvcid": "4420", 00:17:47.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.725 "hdgst": false, 00:17:47.725 "ddgst": false 00:17:47.725 }, 00:17:47.725 "method": "bdev_nvme_attach_controller" 00:17:47.725 }' 00:17:47.725 [2024-07-14 18:49:35.890826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:47.725 [2024-07-14 18:49:35.890827] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
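Each bdevperf instance above reads its configuration from `/dev/fd/63`, which `gen_nvmf_target_json` fills by expanding the heredoc template once per subsystem and merging the fragments with `jq`. A hedged sketch of that expansion step (`gen_attach_config` is an illustrative name, not the real helper, and the literal `tcp`/`10.0.0.2` values stand in for the `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP` variables substituted at runtime):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern: expand the per-subsystem
# heredoc template into one bdev_nvme_attach_controller config fragment,
# as printed in the trace for Nvme1.
gen_attach_config() {
    local subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

With `subsystem=1` this reproduces the `Nvme1`/`cnode1` block printed four times above, one copy per bdevperf process.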
00:17:47.726 [2024-07-14 18:49:35.890937] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:47.726 [2024-07-14 18:49:35.890937] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:47.726 [2024-07-14 18:49:35.891829] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:47.726 [2024-07-14 18:49:35.891827] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:47.726 [2024-07-14 18:49:35.891928] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:47.726 [2024-07-14 18:49:35.891929] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:47.726 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.984 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.984 [2024-07-14 18:49:36.067984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.984 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.984 [2024-07-14 18:49:36.143176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:47.984 [2024-07-14 18:49:36.168295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.984 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.242 [2024-07-14 18:49:36.239281] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:48.242 [2024-07-14 18:49:36.255612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.242 [2024-07-14 18:49:36.312730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.242 [2024-07-14 18:49:36.328462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:48.242 [2024-07-14 18:49:36.382890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:48.500 Running I/O for 1 seconds... 00:17:48.500 Running I/O for 1 seconds... 00:17:48.500 Running I/O for 1 seconds... 00:17:48.500 Running I/O for 1 seconds... 00:17:49.436 00:17:49.436 Latency(us) 00:17:49.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.436 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:49.436 Nvme1n1 : 1.00 200131.76 781.76 0.00 0.00 637.32 268.52 801.00 00:17:49.436 =================================================================================================================== 00:17:49.436 Total : 200131.76 781.76 0.00 0.00 637.32 268.52 801.00 00:17:49.436 00:17:49.436 Latency(us) 00:17:49.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.436 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:49.436 Nvme1n1 : 1.02 6524.33 25.49 0.00 0.00 19386.04 7475.96 28156.21 00:17:49.436 =================================================================================================================== 00:17:49.436 Total : 6524.33 25.49 0.00 0.00 19386.04 7475.96 28156.21 00:17:49.436 00:17:49.436 Latency(us) 00:17:49.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.436 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:49.436 Nvme1n1 : 1.01 10456.90 40.85 0.00 0.00 12192.56 6747.78 21942.42 00:17:49.436 
=================================================================================================================== 00:17:49.436 Total : 10456.90 40.85 0.00 0.00 12192.56 6747.78 21942.42 00:17:49.436 00:17:49.436 Latency(us) 00:17:49.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.436 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:49.437 Nvme1n1 : 1.01 6239.80 24.37 0.00 0.00 20438.20 6747.78 46215.02 00:17:49.437 =================================================================================================================== 00:17:49.437 Total : 6239.80 24.37 0.00 0.00 20438.20 6747.78 46215.02 00:17:49.694 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3583414 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3583416 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3583419 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:49.952 rmmod nvme_tcp 00:17:49.952 rmmod nvme_fabrics 00:17:49.952 rmmod nvme_keyring 00:17:49.952 18:49:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3583378 ']' 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3583378 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3583378 ']' 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3583378 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3583378 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3583378' 00:17:49.952 killing process with pid 3583378 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3583378 00:17:49.952 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3583378 00:17:50.211 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:50.211 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:17:50.211 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:50.211 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.211 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.211 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.211 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.211 18:49:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.172 18:49:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.172 00:17:52.172 real 0m7.052s 00:17:52.172 user 0m16.192s 00:17:52.172 sys 0m3.396s 00:17:52.172 18:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:52.172 18:49:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.172 ************************************ 00:17:52.172 END TEST nvmf_bdev_io_wait 00:17:52.172 ************************************ 00:17:52.172 18:49:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:52.172 18:49:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:52.172 18:49:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:52.172 18:49:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:52.172 18:49:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.172 ************************************ 00:17:52.172 START TEST nvmf_queue_depth 00:17:52.172 ************************************ 00:17:52.172 18:49:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:52.430 * Looking for test storage... 00:17:52.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.430 18:49:40 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.430 18:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:52.431 18:49:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.330 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:54.331 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:54.331 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:54.331 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:54.331 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:54.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:54.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:17:54.331 00:17:54.331 --- 10.0.0.2 ping statistics --- 00:17:54.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.331 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:17:54.331 00:17:54.331 --- 10.0.0.1 ping statistics --- 00:17:54.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.331 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3585630 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3585630 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3585630 ']' 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.331 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.331 [2024-07-14 18:49:42.539760] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:54.331 [2024-07-14 18:49:42.539833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.589 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.590 [2024-07-14 18:49:42.604399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.590 [2024-07-14 18:49:42.687027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
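For reference, the namespace plumbing the harness just performed (nvmf/common.sh, `nvmf_tcp_init`, steps 229–268 in the xtrace above) boils down to: move one physical port into a private network namespace, address both sides of the link, open TCP/4420, and ping-check both directions. A minimal sketch, assuming the interface names (cvl_0_0/cvl_0_1), addresses, and namespace name from this log — they are environment-specific, and the real commands need root, so the sketch only echoes them unless NS_RUN=1:

```shell
# Hedged sketch of nvmf_tcp_init as seen in this log. With NS_RUN unset the
# commands are printed instead of executed (they require root privileges).
nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run() { if [ "${NS_RUN:-0}" = 1 ]; then "$@"; else echo "$*"; fi; }

    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"            # target port enters the netns
    run ip addr add 10.0.0.1/24 dev "$initiator_if"     # initiator side, root netns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                              # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1          # target -> initiator
}
```

The two pings mirror the round-trip checks logged above; if either fails, the harness would never reach the nvmf_tgt startup that follows.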
00:17:54.590 [2024-07-14 18:49:42.687079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.590 [2024-07-14 18:49:42.687093] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.590 [2024-07-14 18:49:42.687105] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.590 [2024-07-14 18:49:42.687115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.590 [2024-07-14 18:49:42.687142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.590 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.590 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:54.590 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:54.590 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:54.590 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.590 18:49:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.590 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.590 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.590 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.590 [2024-07-14 18:49:42.811847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:54.848 18:49:42 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.848 Malloc0 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.848 [2024-07-14 18:49:42.870562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3585659 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3585659 /var/tmp/bdevperf.sock 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3585659 ']' 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.848 18:49:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:54.848 [2024-07-14 18:49:42.917623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:54.848 [2024-07-14 18:49:42.917696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3585659 ] 00:17:54.848 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.848 [2024-07-14 18:49:42.980228] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.848 [2024-07-14 18:49:43.070448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.118 18:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.118 18:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:55.118 18:49:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.118 18:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.118 18:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:55.118 NVMe0n1 00:17:55.118 18:49:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.118 18:49:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:55.379 Running I/O for 10 seconds... 
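Putting the pieces of this run together, queue_depth.sh's flow (lines 21–35 in the xtrace above) is: start nvmf_tgt inside the namespace, provision a malloc-backed subsystem over TCP, then drive it from bdevperf at queue depth 1024. A hedged sketch follows; the bare `nvmf_tgt`, `rpc.py`, `bdevperf`, and `bdevperf.py` names stand in for the full Jenkins workspace paths in the log, backgrounding of the two app processes is noted in comments rather than performed, and with QD_RUN unset the commands are only printed:

```shell
# Sketch of the queue_depth.sh sequence seen above. QD_RUN unset => commands
# are echoed; set QD_RUN=1 (with a real SPDK build and root) to execute.
queue_depth_sketch() {
    run() { if [ "${QD_RUN:-0}" = 1 ]; then "$@"; else echo "$*"; fi; }
    local nqn=nqn.2016-06.io.spdk:cnode1 sock=/var/tmp/bdevperf.sock

    # Target side, inside the namespace (nvmfappstart + RPCs at lines 23-27).
    # nvmf_tgt is backgrounded in the real script.
    run ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2
    run rpc.py nvmf_create_transport -t tcp -o -u 8192   # -u: in-capsule data size
    run rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
    run rpc.py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    run rpc.py nvmf_subsystem_add_ns "$nqn" Malloc0
    run rpc.py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

    # Initiator side (lines 29-35): bdevperf starts suspended (-z, also
    # backgrounded in the real script) until a controller is attached over
    # its own RPC socket, then the 10 s verify workload is kicked off.
    run bdevperf -z -r "$sock" -q 1024 -o 4096 -w verify -t 10
    run rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
    run bdevperf.py -s "$sock" perform_tests
}
```

The `-q 1024 -o 4096 -w verify -t 10` flags are exactly what produced the "depth: 1024, IO size: 4096" results table that follows in the log.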
00:18:05.348 00:18:05.348 Latency(us) 00:18:05.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.348 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:05.348 Verification LBA range: start 0x0 length 0x4000 00:18:05.348 NVMe0n1 : 10.09 8407.38 32.84 0.00 0.00 121294.25 24272.59 74565.40 00:18:05.348 =================================================================================================================== 00:18:05.348 Total : 8407.38 32.84 0.00 0.00 121294.25 24272.59 74565.40 00:18:05.348 0 00:18:05.348 18:49:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3585659 00:18:05.348 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3585659 ']' 00:18:05.348 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3585659 00:18:05.348 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:05.348 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.348 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3585659 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3585659' 00:18:05.605 killing process with pid 3585659 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3585659 00:18:05.605 Received shutdown signal, test time was about 10.000000 seconds 00:18:05.605 00:18:05.605 Latency(us) 00:18:05.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.605 
=================================================================================================================== 00:18:05.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3585659 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:05.605 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:05.605 rmmod nvme_tcp 00:18:05.862 rmmod nvme_fabrics 00:18:05.862 rmmod nvme_keyring 00:18:05.862 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:05.862 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:05.862 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:05.862 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3585630 ']' 00:18:05.862 18:49:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3585630 00:18:05.863 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3585630 ']' 00:18:05.863 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3585630 00:18:05.863 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:05.863 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.863 18:49:53 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3585630 00:18:05.863 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:05.863 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:05.863 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3585630' 00:18:05.863 killing process with pid 3585630 00:18:05.863 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3585630 00:18:05.863 18:49:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3585630 00:18:06.120 18:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.120 18:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.120 18:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.120 18:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.120 18:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.120 18:49:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.120 18:49:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.120 18:49:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.022 18:49:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.022 00:18:08.022 real 0m15.828s 00:18:08.022 user 0m22.418s 00:18:08.022 sys 0m2.900s 00:18:08.022 18:49:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.022 18:49:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:08.022 ************************************ 00:18:08.022 END TEST nvmf_queue_depth 
00:18:08.022 ************************************ 00:18:08.022 18:49:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:08.022 18:49:56 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:08.022 18:49:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:08.022 18:49:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.022 18:49:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.281 ************************************ 00:18:08.281 START TEST nvmf_target_multipath 00:18:08.281 ************************************ 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:08.281 * Looking for test storage... 00:18:08.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.281 18:49:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:10.182 
18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.182 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.183 18:49:58 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:10.183 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:10.183 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.183 
18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:10.183 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:10.183 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:10.183 18:49:58 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:10.183 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:10.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:10.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:18:10.440 00:18:10.440 --- 10.0.0.2 ping statistics --- 00:18:10.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.440 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:18:10.440 00:18:10.440 --- 10.0.0.1 ping statistics --- 00:18:10.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.440 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:10.440 only one NIC for nvmf test 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:10.440 rmmod nvme_tcp 00:18:10.440 rmmod nvme_fabrics 00:18:10.440 rmmod nvme_keyring 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.440 18:49:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.341 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:12.599 00:18:12.599 real 0m4.341s 00:18:12.599 user 0m0.808s 00:18:12.599 sys 0m1.512s 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:12.599 18:50:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:12.599 ************************************ 00:18:12.599 END TEST nvmf_target_multipath 00:18:12.599 ************************************ 00:18:12.599 18:50:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:12.599 18:50:00 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:12.599 18:50:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:12.599 18:50:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.599 18:50:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:12.599 ************************************ 00:18:12.599 START TEST nvmf_zcopy 00:18:12.599 ************************************ 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:12.599 * Looking for test storage... 
00:18:12.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.599 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:12.600 18:50:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:14.500 18:50:02 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:14.500 18:50:02 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:14.500 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:14.500 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.500 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:14.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:14.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.501 18:50:02 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:14.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:18:14.501 00:18:14.501 --- 10.0.0.2 ping statistics --- 00:18:14.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.501 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:18:14.501 00:18:14.501 --- 10.0.0.1 ping statistics --- 00:18:14.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.501 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:14.501 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3590813 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3590813 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3590813 ']' 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.759 18:50:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:14.759 [2024-07-14 18:50:02.778150] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:14.759 [2024-07-14 18:50:02.778261] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.759 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.759 [2024-07-14 18:50:02.848361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.759 [2024-07-14 18:50:02.944660] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.759 [2024-07-14 18:50:02.944728] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.759 [2024-07-14 18:50:02.944743] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.759 [2024-07-14 18:50:02.944756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.759 [2024-07-14 18:50:02.944768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:14.759 [2024-07-14 18:50:02.944810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.018 [2024-07-14 18:50:03.092117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.018 [2024-07-14 18:50:03.108340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.018 malloc0 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:15.018 { 00:18:15.018 "params": { 00:18:15.018 "name": "Nvme$subsystem", 00:18:15.018 "trtype": "$TEST_TRANSPORT", 00:18:15.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:15.018 "adrfam": "ipv4", 00:18:15.018 "trsvcid": "$NVMF_PORT", 00:18:15.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:15.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:15.018 "hdgst": ${hdgst:-false}, 00:18:15.018 "ddgst": ${ddgst:-false} 00:18:15.018 }, 00:18:15.018 "method": "bdev_nvme_attach_controller" 00:18:15.018 } 00:18:15.018 EOF 00:18:15.018 )") 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:15.018 18:50:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:15.018 "params": { 00:18:15.018 "name": "Nvme1", 00:18:15.018 "trtype": "tcp", 00:18:15.018 "traddr": "10.0.0.2", 00:18:15.018 "adrfam": "ipv4", 00:18:15.018 "trsvcid": "4420", 00:18:15.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.018 "hdgst": false, 00:18:15.018 "ddgst": false 00:18:15.018 }, 00:18:15.018 "method": "bdev_nvme_attach_controller" 00:18:15.018 }' 00:18:15.018 [2024-07-14 18:50:03.192846] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:18:15.018 [2024-07-14 18:50:03.192939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590846 ] 00:18:15.018 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.276 [2024-07-14 18:50:03.264109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.276 [2024-07-14 18:50:03.360420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.533 Running I/O for 10 seconds... 00:18:25.545 00:18:25.545 Latency(us) 00:18:25.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.545 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:25.545 Verification LBA range: start 0x0 length 0x1000 00:18:25.545 Nvme1n1 : 10.02 5232.76 40.88 0.00 0.00 24393.64 958.77 33787.45 00:18:25.545 =================================================================================================================== 00:18:25.545 Total : 5232.76 40.88 0.00 0.00 24393.64 958.77 33787.45 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3592032 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:25.803 18:50:13 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:25.803 { 00:18:25.803 "params": { 00:18:25.803 "name": "Nvme$subsystem", 00:18:25.803 "trtype": "$TEST_TRANSPORT", 00:18:25.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:25.803 "adrfam": "ipv4", 00:18:25.803 "trsvcid": "$NVMF_PORT", 00:18:25.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:25.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:25.803 "hdgst": ${hdgst:-false}, 00:18:25.803 "ddgst": ${ddgst:-false} 00:18:25.803 }, 00:18:25.803 "method": "bdev_nvme_attach_controller" 00:18:25.803 } 00:18:25.803 EOF 00:18:25.803 )") 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:25.803 [2024-07-14 18:50:13.873862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.803 [2024-07-14 18:50:13.873944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:25.803 18:50:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:25.803 "params": { 00:18:25.803 "name": "Nvme1", 00:18:25.803 "trtype": "tcp", 00:18:25.803 "traddr": "10.0.0.2", 00:18:25.803 "adrfam": "ipv4", 00:18:25.803 "trsvcid": "4420", 00:18:25.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.803 "hdgst": false, 00:18:25.803 "ddgst": false 00:18:25.803 }, 00:18:25.803 "method": "bdev_nvme_attach_controller" 00:18:25.803 }' 00:18:25.804 [2024-07-14 18:50:13.881825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.881852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.889837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.889860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.897852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.897873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.905894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.905915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.910463] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:18:25.804 [2024-07-14 18:50:13.910536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3592032 ] 00:18:25.804 [2024-07-14 18:50:13.913913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.913949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.921936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.921958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.929968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.929990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.937986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.938006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.804 [2024-07-14 18:50:13.946009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.946031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.954032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.954054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.962052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.962074] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.970076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.970098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.976532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.804 [2024-07-14 18:50:13.978098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.978125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.986179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.986219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:13.994168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:13.994199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:14.002194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:14.002220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:14.010203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:14.010228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:14.018228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.804 [2024-07-14 18:50:14.018254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.804 [2024-07-14 18:50:14.026245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:18:25.804 [2024-07-14 18:50:14.026269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.034292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.034330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.042285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.042311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.050305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.050330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.058328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.058353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.066350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.066375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.070901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.062 [2024-07-14 18:50:14.074374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.074399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.082394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.082419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.090456] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.090493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.098452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.098489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.106476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.106511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.114537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.114581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.122540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.122580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.130557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.130598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.138552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.138579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.062 [2024-07-14 18:50:14.146602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.062 [2024-07-14 18:50:14.146640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.154624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.154676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.162626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.162656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.170638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.170662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.178684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.178713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.186689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.186716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.194717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.194744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.202741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.202768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.210761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.210786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.218784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 
[2024-07-14 18:50:14.218808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.226807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.226830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.234829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.234853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.242855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.242890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.250890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.250930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.258912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.258951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.266944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.266965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.274966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.274987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.063 [2024-07-14 18:50:14.282981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.063 [2024-07-14 18:50:14.283002] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.291020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.291042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.299013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.299035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.307029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.307050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.315049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.315070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.323070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.323090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.331097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.331118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.339119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.339140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.347139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.347179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:26.321 [2024-07-14 18:50:14.355181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.355205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.363207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.363231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.371233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.371257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.379259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.379283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.387275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.387301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.395303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.395328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.403331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.403360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 Running I/O for 5 seconds... 
00:18:26.321 [2024-07-14 18:50:14.411350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.411375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.423674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.423705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.433520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.433551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.446225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.446255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.458125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.458152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.469750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.469780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.482972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.482999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.493120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.493146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.321 [2024-07-14 18:50:14.505029] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.321 [2024-07-14 18:50:14.505068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.133 [2024-07-14 18:50:16.348598] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.133 [2024-07-14 18:50:16.348636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.358511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.391 [2024-07-14 18:50:16.358538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.368914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.391 [2024-07-14 18:50:16.368941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.379330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.391 [2024-07-14 18:50:16.379357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.392055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.391 [2024-07-14 18:50:16.392083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.404116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.391 [2024-07-14 18:50:16.404143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.414384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.391 [2024-07-14 18:50:16.414414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.425764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.391 [2024-07-14 18:50:16.425794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.436830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:28.391 [2024-07-14 18:50:16.436860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.448367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.391 [2024-07-14 18:50:16.448396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.391 [2024-07-14 18:50:16.459741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.459771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.471148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.471190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.482623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.482653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.493809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.493839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.504860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.504899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.516047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.516074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.527017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 
[2024-07-14 18:50:16.527044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.538235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.538265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.551814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.551845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.562715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.562755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.574117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.574143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.585819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.585850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.597584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.597614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.392 [2024-07-14 18:50:16.609238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.392 [2024-07-14 18:50:16.609281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.650 [2024-07-14 18:50:16.619979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.650 [2024-07-14 18:50:16.620006] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.650 [2024-07-14 18:50:16.631173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.650 [2024-07-14 18:50:16.631217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.650 [2024-07-14 18:50:16.644456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.644486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.655351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.655381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.666211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.666237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.677426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.677455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.689283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.689315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.700847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.700885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.712527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.712557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:28.651 [2024-07-14 18:50:16.724422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.724452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.735805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.735834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.749516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.749545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.760146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.760173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.771537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.771567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.783077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.783113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.794333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.794363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.805949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.805976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.816961] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.816987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.828485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.828515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.839556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.839586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.851452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.851482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.862591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.862620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.651 [2024-07-14 18:50:16.874149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.651 [2024-07-14 18:50:16.874176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.885220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.885249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.896625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.896655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.908040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.908067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.919214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.919244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.930194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.930224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.941604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.941634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.952898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.952941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.963806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.963835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.975313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.975343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:16.988302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:16.988332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:17.000785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 
[2024-07-14 18:50:17.000830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:17.011437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:17.011472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.909 [2024-07-14 18:50:17.022998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.909 [2024-07-14 18:50:17.023025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.910 [2024-07-14 18:50:17.034467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.910 [2024-07-14 18:50:17.034497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.910 [2024-07-14 18:50:17.045653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.910 [2024-07-14 18:50:17.045682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.910 [2024-07-14 18:50:17.057465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.910 [2024-07-14 18:50:17.057495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.910 [2024-07-14 18:50:17.068626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.910 [2024-07-14 18:50:17.068655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.910 [2024-07-14 18:50:17.080423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.910 [2024-07-14 18:50:17.080453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.910 [2024-07-14 18:50:17.092149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.910 [2024-07-14 18:50:17.092176] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.910 [2024-07-14 18:50:17.104068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.910 [2024-07-14 18:50:17.104097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.910 [2024-07-14 18:50:17.115503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.910 [2024-07-14 18:50:17.115533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.910 [2024-07-14 18:50:17.126997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.910 [2024-07-14 18:50:17.127024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.138287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.138317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.149701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.149731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.161609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.161640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.173084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.173110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.184419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.184449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:29.167 [2024-07-14 18:50:17.196095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.196123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.207502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.207533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.218984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.219011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.230148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.230174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.243302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.243332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.253813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.253842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.265731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.265761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.277866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.277903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.289342] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.289372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.300562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.300592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.311682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.311712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.324800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.324830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.335900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.335944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.347842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.347872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.359311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.359342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.371998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.372025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.167 [2024-07-14 18:50:17.382381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:29.167 [2024-07-14 18:50:17.382411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.394136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.394164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.405514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.405545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.416395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.416427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.427639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.427670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.438801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.438832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.450089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.450117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.463321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.463351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.474546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 
[2024-07-14 18:50:17.474577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.485934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.485961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.497445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.497475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.508965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.508992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.520608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.520638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.532021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.532049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.543207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.543237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.554598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.554629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.565739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.565768] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.577063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.577089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.588405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.588434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.600020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.600047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.611948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.611975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.623364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.623394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.634678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.634708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.426 [2024-07-14 18:50:17.645935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.426 [2024-07-14 18:50:17.645962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.684 [2024-07-14 18:50:17.657002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.684 [2024-07-14 18:50:17.657028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:18:29.684 [2024-07-14 18:50:17.668188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:29.684 [2024-07-14 18:50:17.668218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:31.236 [2024-07-14 18:50:19.381864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.236
[2024-07-14 18:50:19.381920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:31.236
00:18:31.236 Latency(us)
00:18:31.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:31.236 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:31.236 Nvme1n1 : 5.01 11273.26 88.07 0.00 0.00 11338.55 4975.88 19709.35
00:18:31.236 ===================================================================================================================
00:18:31.236 Total : 11273.26 88.07 0.00 0.00 11338.55 4975.88 19709.35
00:18:31.495 [2024-07-14 18:50:19.507039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:31.495 [2024-07-14 18:50:19.507088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:18:31.495 [2024-07-14 18:50:19.515040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.515084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.523057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.523104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.531076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.531123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.539105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.539165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.547127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.547172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.555209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.555254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.563209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.563257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.571217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.571256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.579266] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.579309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.587302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.587348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.595325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.595371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.603316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.603343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.611332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.611365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.619383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.619428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.627412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.627459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.635391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.635415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.643413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.643438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 [2024-07-14 18:50:19.651435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:31.495 [2024-07-14 18:50:19.651459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:31.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3592032) - No such process 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3592032 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:31.495 delay0 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.495 18:50:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:31.495 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.753 [2024-07-14 18:50:19.773830] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:39.857 Initializing NVMe Controllers 00:18:39.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:39.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:39.857 Initialization complete. Launching workers. 00:18:39.857 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 260, failed: 15251 00:18:39.857 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15419, failed to submit 92 00:18:39.857 success 15314, unsuccess 105, failed 0 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.857 rmmod nvme_tcp 00:18:39.857 rmmod nvme_fabrics 00:18:39.857 rmmod nvme_keyring 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@125 -- # return 0 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3590813 ']' 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3590813 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3590813 ']' 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3590813 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3590813 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3590813' 00:18:39.857 killing process with pid 3590813 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3590813 00:18:39.857 18:50:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3590813 00:18:39.857 18:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.857 18:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.857 18:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.857 18:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.857 18:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.857 18:50:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.857 18:50:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.857 18:50:27 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.278 18:50:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:41.278 00:18:41.278 real 0m28.601s 00:18:41.278 user 0m40.387s 00:18:41.278 sys 0m9.773s 00:18:41.278 18:50:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.278 18:50:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:41.278 ************************************ 00:18:41.278 END TEST nvmf_zcopy 00:18:41.278 ************************************ 00:18:41.278 18:50:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:41.278 18:50:29 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:41.278 18:50:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:41.278 18:50:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.278 18:50:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:41.278 ************************************ 00:18:41.278 START TEST nvmf_nmic 00:18:41.278 ************************************ 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:41.278 * Looking for test storage... 
00:18:41.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.278 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:41.279 18:50:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.180 18:50:31 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.180 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:43.181 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:43.181 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 
== e810 ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:43.181 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:43.181 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.181 18:50:31 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.181 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:43.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:18:43.439 00:18:43.439 --- 10.0.0.2 ping statistics --- 00:18:43.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.439 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:18:43.439 00:18:43.439 --- 10.0.0.1 ping statistics --- 00:18:43.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.439 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3595538 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3595538 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3595538 ']' 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.439 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.439 [2024-07-14 18:50:31.493512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:43.439 [2024-07-14 18:50:31.493596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.439 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.439 [2024-07-14 18:50:31.562961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.440 [2024-07-14 18:50:31.658243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.440 [2024-07-14 18:50:31.658308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.440 [2024-07-14 18:50:31.658324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.440 [2024-07-14 18:50:31.658339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.440 [2024-07-14 18:50:31.658350] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:43.440 [2024-07-14 18:50:31.658439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.440 [2024-07-14 18:50:31.658493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.440 [2024-07-14 18:50:31.658546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.440 [2024-07-14 18:50:31.658548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 [2024-07-14 18:50:31.834938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 Malloc0 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
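The `-m 0xF` mask passed to `nvmf_tgt` above selects cores 0-3, which is why the log shows exactly four reactor threads starting (one per core). A minimal sketch of how such a hex core mask decodes to a core list — plain Python for illustration, not SPDK code:

```python
def decode_coremask(mask: int) -> list[int]:
    """Return the CPU core indices selected by a hex core mask."""
    return [core for core in range(mask.bit_length()) if (mask >> core) & 1]

# -m 0xF selects cores 0-3; SPDK starts one reactor per selected core
print(decode_coremask(0xF))  # [0, 1, 2, 3]
print(decode_coremask(0x5))  # [0, 2]
```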
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 [2024-07-14 18:50:31.888396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:43.698 test case1: single bdev can't be used in multiple subsystems 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 [2024-07-14 18:50:31.912215] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:43.698 [2024-07-14 18:50:31.912244] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:43.698 [2024-07-14 18:50:31.912275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.698 request: 00:18:43.698 { 00:18:43.698 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:43.698 "namespace": { 00:18:43.698 "bdev_name": "Malloc0", 00:18:43.698 "no_auto_visible": false 00:18:43.698 }, 00:18:43.698 "method": "nvmf_subsystem_add_ns", 00:18:43.698 "req_id": 1 00:18:43.698 } 00:18:43.698 Got JSON-RPC error response 00:18:43.698 response: 00:18:43.698 { 00:18:43.698 "code": -32602, 00:18:43.698 "message": "Invalid parameters" 00:18:43.698 } 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:18:43.698 Adding namespace failed - expected result. 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:43.698 test case2: host connect to nvmf target in multiple paths 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.698 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:43.698 [2024-07-14 18:50:31.920334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:43.956 18:50:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.956 18:50:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:44.521 18:50:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:45.087 18:50:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:45.087 18:50:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:45.087 18:50:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.087 18:50:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:45.087 18:50:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:46.982 18:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:46.982 18:50:35 
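The `nvmf_subsystem_add_ns` failure in test case1 above returns a JSON-RPC error object with code -32602 ("Invalid parameters"), which the test treats as the expected result. A small sketch of checking such a response for that expected failure; the payload literal here is transcribed from the log above, not fetched from a live target:

```python
import json

# Error body transcribed from the log above (not from a live target).
response_text = '{"code": -32602, "message": "Invalid parameters"}'

def is_invalid_params(resp: str) -> bool:
    """True if a JSON-RPC error body carries the 'invalid params' code."""
    err = json.loads(resp)
    return err.get("code") == -32602

print(is_invalid_params(response_text))  # True
```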
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:46.982 18:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:46.982 18:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:46.982 18:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.982 18:50:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:46.982 18:50:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:47.240 [global] 00:18:47.240 thread=1 00:18:47.240 invalidate=1 00:18:47.240 rw=write 00:18:47.240 time_based=1 00:18:47.240 runtime=1 00:18:47.240 ioengine=libaio 00:18:47.240 direct=1 00:18:47.240 bs=4096 00:18:47.240 iodepth=1 00:18:47.240 norandommap=0 00:18:47.240 numjobs=1 00:18:47.240 00:18:47.240 verify_dump=1 00:18:47.240 verify_backlog=512 00:18:47.240 verify_state_save=0 00:18:47.240 do_verify=1 00:18:47.240 verify=crc32c-intel 00:18:47.240 [job0] 00:18:47.240 filename=/dev/nvme0n1 00:18:47.240 Could not set queue depth (nvme0n1) 00:18:47.240 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:47.240 fio-3.35 00:18:47.240 Starting 1 thread 00:18:48.612 00:18:48.612 job0: (groupid=0, jobs=1): err= 0: pid=3596057: Sun Jul 14 18:50:36 2024 00:18:48.612 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec) 00:18:48.612 slat (nsec): min=7390, max=48308, avg=25869.29, stdev=12699.05 00:18:48.612 clat (usec): min=40920, max=42318, avg=41887.94, stdev=318.57 00:18:48.612 lat (usec): min=40956, max=42326, avg=41913.81, stdev=317.01 00:18:48.612 clat percentiles (usec): 00:18:48.612 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:48.612 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 
00:18:48.612 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:48.612 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:48.612 | 99.99th=[42206] 00:18:48.612 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:18:48.612 slat (usec): min=7, max=31064, avg=69.06, stdev=1372.52 00:18:48.612 clat (usec): min=163, max=335, avg=183.53, stdev=11.36 00:18:48.612 lat (usec): min=171, max=31300, avg=252.60, stdev=1374.88 00:18:48.612 clat percentiles (usec): 00:18:48.612 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:18:48.612 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:18:48.612 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 200], 00:18:48.612 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 338], 99.95th=[ 338], 00:18:48.612 | 99.99th=[ 338] 00:18:48.612 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:48.612 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:48.612 lat (usec) : 250=95.87%, 500=0.19% 00:18:48.612 lat (msec) : 50=3.94% 00:18:48.612 cpu : usr=0.20%, sys=0.69%, ctx=535, majf=0, minf=2 00:18:48.612 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:48.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.612 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.612 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:48.612 00:18:48.612 Run status group 0 (all jobs): 00:18:48.612 READ: bw=83.1KiB/s (85.1kB/s), 83.1KiB/s-83.1KiB/s (85.1kB/s-85.1kB/s), io=84.0KiB (86.0kB), run=1011-1011msec 00:18:48.612 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:18:48.612 00:18:48.612 Disk stats (read/write): 00:18:48.612 nvme0n1: ios=70/512, 
merge=0/0, ticks=1434/87, in_queue=1521, util=98.60% 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:48.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.612 rmmod nvme_tcp 00:18:48.612 rmmod nvme_fabrics 00:18:48.612 rmmod nvme_keyring 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:48.612 18:50:36 
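The fio run summary above reports READ bw=83.1KiB/s for 84.0KiB transferred over the 1011 msec run, and WRITE bw=2026KiB/s for 2048KiB. A quick arithmetic check of those figures, with the values copied from the log:

```python
# Totals copied from the fio run summary above.
run_ms = 1011      # runtime in milliseconds
read_kib = 84.0    # total data read (KiB)
write_kib = 2048   # total data written (KiB)

read_bw = read_kib / (run_ms / 1000)    # KiB/s
write_bw = write_kib / (run_ms / 1000)  # KiB/s

print(round(read_bw, 1))  # 83.1, matching fio's reported READ bandwidth
print(round(write_bw))    # 2026, matching fio's reported WRITE bandwidth
```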
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3595538 ']' 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3595538 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3595538 ']' 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3595538 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3595538 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3595538' 00:18:48.612 killing process with pid 3595538 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3595538 00:18:48.612 18:50:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3595538 00:18:48.872 18:50:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:48.872 18:50:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:48.872 18:50:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:48.872 18:50:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.872 18:50:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.872 18:50:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.872 18:50:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.872 18:50:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.417 18:50:39 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:51.417 00:18:51.417 real 0m9.786s 00:18:51.417 user 0m22.205s 00:18:51.417 sys 0m2.302s 00:18:51.417 18:50:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:51.417 18:50:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:51.417 ************************************ 00:18:51.417 END TEST nvmf_nmic 00:18:51.417 ************************************ 00:18:51.417 18:50:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:51.417 18:50:39 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:51.417 18:50:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:51.417 18:50:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:51.417 18:50:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:51.417 ************************************ 00:18:51.417 START TEST nvmf_fio_target 00:18:51.417 ************************************ 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:51.417 * Looking for test storage... 
00:18:51.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:51.417 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:51.418 18:50:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:53.319 
18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.319 
18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:53.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:53.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:53.319 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:53.319 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.319 18:50:41 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:53.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:18:53.319 00:18:53.319 --- 10.0.0.2 ping statistics --- 00:18:53.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.319 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:53.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:18:53.319 00:18:53.319 --- 10.0.0.1 ping statistics --- 00:18:53.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.319 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.319 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3598121 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3598121 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 
-- # '[' -z 3598121 ']' 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.320 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.320 [2024-07-14 18:50:41.358162] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:53.320 [2024-07-14 18:50:41.358248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.320 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.320 [2024-07-14 18:50:41.427974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.320 [2024-07-14 18:50:41.525241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.320 [2024-07-14 18:50:41.525298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.320 [2024-07-14 18:50:41.525315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.320 [2024-07-14 18:50:41.525329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.320 [2024-07-14 18:50:41.525341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:53.320 [2024-07-14 18:50:41.525428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.320 [2024-07-14 18:50:41.525492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.320 [2024-07-14 18:50:41.525549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.320 [2024-07-14 18:50:41.525546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.578 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.578 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:53.578 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.578 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.578 18:50:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.578 18:50:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.578 18:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:53.835 [2024-07-14 18:50:41.951799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.835 18:50:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.093 18:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:54.093 18:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.350 18:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:54.350 18:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:18:54.607 18:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:54.607 18:50:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:54.865 18:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:54.865 18:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:55.121 18:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:55.378 18:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:55.378 18:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:55.635 18:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:55.635 18:50:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:55.893 18:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:55.893 18:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:56.150 18:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:56.408 18:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:56.408 18:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:56.665 18:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:56.665 18:50:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:56.923 18:50:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.181 [2024-07-14 18:50:45.321606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.181 18:50:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:57.439 18:50:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:57.697 18:50:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:58.632 18:50:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:58.632 18:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:58.632 18:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.632 18:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:58.632 18:50:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:58.632 18:50:46 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:00.530 18:50:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:00.530 18:50:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:00.530 18:50:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:00.530 18:50:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:00.530 18:50:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.530 18:50:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:00.531 18:50:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:00.531 [global] 00:19:00.531 thread=1 00:19:00.531 invalidate=1 00:19:00.531 rw=write 00:19:00.531 time_based=1 00:19:00.531 runtime=1 00:19:00.531 ioengine=libaio 00:19:00.531 direct=1 00:19:00.531 bs=4096 00:19:00.531 iodepth=1 00:19:00.531 norandommap=0 00:19:00.531 numjobs=1 00:19:00.531 00:19:00.531 verify_dump=1 00:19:00.531 verify_backlog=512 00:19:00.531 verify_state_save=0 00:19:00.531 do_verify=1 00:19:00.531 verify=crc32c-intel 00:19:00.531 [job0] 00:19:00.531 filename=/dev/nvme0n1 00:19:00.531 [job1] 00:19:00.531 filename=/dev/nvme0n2 00:19:00.531 [job2] 00:19:00.531 filename=/dev/nvme0n3 00:19:00.531 [job3] 00:19:00.531 filename=/dev/nvme0n4 00:19:00.531 Could not set queue depth (nvme0n1) 00:19:00.531 Could not set queue depth (nvme0n2) 00:19:00.531 Could not set queue depth (nvme0n3) 00:19:00.531 Could not set queue depth (nvme0n4) 00:19:00.531 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.531 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:19:00.531 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.531 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:00.531 fio-3.35 00:19:00.531 Starting 4 threads 00:19:01.905 00:19:01.905 job0: (groupid=0, jobs=1): err= 0: pid=3599185: Sun Jul 14 18:50:49 2024 00:19:01.905 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:01.905 slat (nsec): min=4302, max=67606, avg=12068.36, stdev=6065.14 00:19:01.905 clat (usec): min=199, max=41230, avg=398.79, stdev=2529.91 00:19:01.905 lat (usec): min=209, max=41242, avg=410.86, stdev=2530.27 00:19:01.905 clat percentiles (usec): 00:19:01.905 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:19:01.905 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:19:01.905 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 306], 00:19:01.905 | 99.00th=[ 449], 99.50th=[ 510], 99.90th=[41157], 99.95th=[41157], 00:19:01.905 | 99.99th=[41157] 00:19:01.905 write: IOPS=1938, BW=7752KiB/s (7938kB/s)(7760KiB/1001msec); 0 zone resets 00:19:01.905 slat (nsec): min=5790, max=38230, avg=13391.44, stdev=4307.28 00:19:01.905 clat (usec): min=141, max=475, avg=170.01, stdev=14.15 00:19:01.905 lat (usec): min=149, max=496, avg=183.40, stdev=15.12 00:19:01.905 clat percentiles (usec): 00:19:01.905 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:19:01.905 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:19:01.905 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 184], 95.00th=[ 192], 00:19:01.905 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 322], 99.95th=[ 474], 00:19:01.905 | 99.99th=[ 474] 00:19:01.905 bw ( KiB/s): min= 8192, max= 8192, per=34.78%, avg=8192.00, stdev= 0.00, samples=1 00:19:01.905 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:01.905 lat (usec) : 250=90.56%, 500=9.18%, 750=0.09% 00:19:01.905 
lat (msec) : 50=0.17% 00:19:01.905 cpu : usr=3.10%, sys=4.20%, ctx=3476, majf=0, minf=2 00:19:01.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.905 issued rwts: total=1536,1940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.905 job1: (groupid=0, jobs=1): err= 0: pid=3599186: Sun Jul 14 18:50:49 2024 00:19:01.905 read: IOPS=1118, BW=4476KiB/s (4583kB/s)(4516KiB/1009msec) 00:19:01.905 slat (nsec): min=5728, max=74824, avg=18364.52, stdev=8306.11 00:19:01.905 clat (usec): min=213, max=41136, avg=490.88, stdev=2510.95 00:19:01.905 lat (usec): min=226, max=41187, avg=509.24, stdev=2511.65 00:19:01.905 clat percentiles (usec): 00:19:01.905 | 1.00th=[ 223], 5.00th=[ 239], 10.00th=[ 253], 20.00th=[ 273], 00:19:01.905 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 330], 00:19:01.905 | 70.00th=[ 359], 80.00th=[ 388], 90.00th=[ 412], 95.00th=[ 453], 00:19:01.905 | 99.00th=[ 486], 99.50th=[ 3359], 99.90th=[41157], 99.95th=[41157], 00:19:01.905 | 99.99th=[41157] 00:19:01.905 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:19:01.905 slat (usec): min=6, max=40696, avg=59.78, stdev=1180.77 00:19:01.905 clat (usec): min=146, max=460, avg=213.47, stdev=59.21 00:19:01.905 lat (usec): min=160, max=41074, avg=273.25, stdev=1187.27 00:19:01.905 clat percentiles (usec): 00:19:01.905 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:19:01.905 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 198], 00:19:01.905 | 70.00th=[ 212], 80.00th=[ 247], 90.00th=[ 326], 95.00th=[ 347], 00:19:01.905 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 461], 99.95th=[ 461], 00:19:01.905 | 99.99th=[ 461] 00:19:01.905 bw ( KiB/s): min= 5464, max= 6824, per=26.08%, 
avg=6144.00, stdev=961.67, samples=2 00:19:01.905 iops : min= 1366, max= 1706, avg=1536.00, stdev=240.42, samples=2 00:19:01.905 lat (usec) : 250=50.17%, 500=49.57%, 750=0.04% 00:19:01.905 lat (msec) : 4=0.04%, 50=0.19% 00:19:01.905 cpu : usr=2.78%, sys=6.05%, ctx=2669, majf=0, minf=1 00:19:01.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.905 issued rwts: total=1129,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.905 job2: (groupid=0, jobs=1): err= 0: pid=3599189: Sun Jul 14 18:50:49 2024 00:19:01.905 read: IOPS=281, BW=1128KiB/s (1155kB/s)(1156KiB/1025msec) 00:19:01.905 slat (nsec): min=5194, max=49865, avg=24304.92, stdev=9990.35 00:19:01.905 clat (usec): min=226, max=42003, avg=3062.99, stdev=10174.78 00:19:01.905 lat (usec): min=237, max=42022, avg=3087.30, stdev=10175.10 00:19:01.905 clat percentiles (usec): 00:19:01.905 | 1.00th=[ 229], 5.00th=[ 245], 10.00th=[ 265], 20.00th=[ 306], 00:19:01.905 | 30.00th=[ 334], 40.00th=[ 367], 50.00th=[ 383], 60.00th=[ 396], 00:19:01.905 | 70.00th=[ 416], 80.00th=[ 449], 90.00th=[ 490], 95.00th=[41157], 00:19:01.905 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:01.905 | 99.99th=[42206] 00:19:01.905 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:19:01.905 slat (nsec): min=6425, max=56296, avg=15453.34, stdev=7647.23 00:19:01.905 clat (usec): min=159, max=3870, avg=234.64, stdev=164.89 00:19:01.905 lat (usec): min=175, max=3886, avg=250.10, stdev=164.93 00:19:01.905 clat percentiles (usec): 00:19:01.905 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:19:01.905 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:19:01.905 | 70.00th=[ 231], 80.00th=[ 
239], 90.00th=[ 253], 95.00th=[ 285], 00:19:01.905 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 3884], 99.95th=[ 3884], 00:19:01.905 | 99.99th=[ 3884] 00:19:01.905 bw ( KiB/s): min= 4096, max= 4096, per=17.39%, avg=4096.00, stdev= 0.00, samples=1 00:19:01.905 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:01.905 lat (usec) : 250=59.05%, 500=37.83%, 750=0.62% 00:19:01.905 lat (msec) : 4=0.12%, 50=2.37% 00:19:01.905 cpu : usr=0.49%, sys=1.76%, ctx=803, majf=0, minf=1 00:19:01.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.905 issued rwts: total=289,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.905 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.905 job3: (groupid=0, jobs=1): err= 0: pid=3599190: Sun Jul 14 18:50:49 2024 00:19:01.905 read: IOPS=1539, BW=6158KiB/s (6306kB/s)(6164KiB/1001msec) 00:19:01.905 slat (nsec): min=4554, max=57726, avg=15343.45, stdev=7933.58 00:19:01.905 clat (usec): min=207, max=584, avg=314.13, stdev=69.40 00:19:01.905 lat (usec): min=212, max=593, avg=329.47, stdev=72.67 00:19:01.905 clat percentiles (usec): 00:19:01.905 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:19:01.905 | 30.00th=[ 265], 40.00th=[ 285], 50.00th=[ 310], 60.00th=[ 330], 00:19:01.905 | 70.00th=[ 355], 80.00th=[ 379], 90.00th=[ 408], 95.00th=[ 433], 00:19:01.905 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 562], 99.95th=[ 586], 00:19:01.905 | 99.99th=[ 586] 00:19:01.905 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:01.905 slat (nsec): min=5839, max=53985, avg=16857.36, stdev=5968.52 00:19:01.905 clat (usec): min=151, max=691, avg=215.85, stdev=33.32 00:19:01.905 lat (usec): min=158, max=711, avg=232.71, stdev=35.07 00:19:01.905 clat percentiles (usec): 00:19:01.905 | 
1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 188], 00:19:01.905 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 227], 00:19:01.905 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 265], 00:19:01.905 | 99.00th=[ 334], 99.50th=[ 367], 99.90th=[ 392], 99.95th=[ 396], 00:19:01.905 | 99.99th=[ 693] 00:19:01.905 bw ( KiB/s): min= 8192, max= 8192, per=34.78%, avg=8192.00, stdev= 0.00, samples=1 00:19:01.905 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:01.905 lat (usec) : 250=63.61%, 500=36.05%, 750=0.33% 00:19:01.905 cpu : usr=3.90%, sys=6.50%, ctx=3589, majf=0, minf=1 00:19:01.905 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:01.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:01.906 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:01.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:01.906 00:19:01.906 Run status group 0 (all jobs): 00:19:01.906 READ: bw=17.1MiB/s (18.0MB/s), 1128KiB/s-6158KiB/s (1155kB/s-6306kB/s), io=17.6MiB (18.4MB), run=1001-1025msec 00:19:01.906 WRITE: bw=23.0MiB/s (24.1MB/s), 1998KiB/s-8184KiB/s (2046kB/s-8380kB/s), io=23.6MiB (24.7MB), run=1001-1025msec 00:19:01.906 00:19:01.906 Disk stats (read/write): 00:19:01.906 nvme0n1: ios=1465/1536, merge=0/0, ticks=514/252, in_queue=766, util=86.67% 00:19:01.906 nvme0n2: ios=1049/1423, merge=0/0, ticks=1309/286, in_queue=1595, util=91.25% 00:19:01.906 nvme0n3: ios=307/512, merge=0/0, ticks=1574/108, in_queue=1682, util=93.42% 00:19:01.906 nvme0n4: ios=1544/1536, merge=0/0, ticks=524/316, in_queue=840, util=95.89% 00:19:01.906 18:50:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:01.906 [global] 00:19:01.906 thread=1 00:19:01.906 
invalidate=1 00:19:01.906 rw=randwrite 00:19:01.906 time_based=1 00:19:01.906 runtime=1 00:19:01.906 ioengine=libaio 00:19:01.906 direct=1 00:19:01.906 bs=4096 00:19:01.906 iodepth=1 00:19:01.906 norandommap=0 00:19:01.906 numjobs=1 00:19:01.906 00:19:01.906 verify_dump=1 00:19:01.906 verify_backlog=512 00:19:01.906 verify_state_save=0 00:19:01.906 do_verify=1 00:19:01.906 verify=crc32c-intel 00:19:01.906 [job0] 00:19:01.906 filename=/dev/nvme0n1 00:19:01.906 [job1] 00:19:01.906 filename=/dev/nvme0n2 00:19:01.906 [job2] 00:19:01.906 filename=/dev/nvme0n3 00:19:01.906 [job3] 00:19:01.906 filename=/dev/nvme0n4 00:19:01.906 Could not set queue depth (nvme0n1) 00:19:01.906 Could not set queue depth (nvme0n2) 00:19:01.906 Could not set queue depth (nvme0n3) 00:19:01.906 Could not set queue depth (nvme0n4) 00:19:02.164 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.165 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.165 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.165 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.165 fio-3.35 00:19:02.165 Starting 4 threads 00:19:03.537 00:19:03.537 job0: (groupid=0, jobs=1): err= 0: pid=3599420: Sun Jul 14 18:50:51 2024 00:19:03.537 read: IOPS=1780, BW=7121KiB/s (7292kB/s)(7128KiB/1001msec) 00:19:03.537 slat (nsec): min=4520, max=54143, avg=12553.88, stdev=6237.82 00:19:03.537 clat (usec): min=190, max=876, avg=296.71, stdev=57.83 00:19:03.537 lat (usec): min=196, max=885, avg=309.26, stdev=60.46 00:19:03.537 clat percentiles (usec): 00:19:03.537 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 241], 00:19:03.537 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 306], 00:19:03.537 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 
379], 00:19:03.537 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 783], 99.95th=[ 881], 00:19:03.537 | 99.99th=[ 881] 00:19:03.537 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:03.537 slat (nsec): min=6384, max=55759, avg=14608.69, stdev=7459.64 00:19:03.537 clat (usec): min=135, max=587, avg=197.09, stdev=41.84 00:19:03.537 lat (usec): min=143, max=596, avg=211.70, stdev=44.75 00:19:03.537 clat percentiles (usec): 00:19:03.537 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 161], 00:19:03.537 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 200], 00:19:03.537 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 247], 95.00th=[ 273], 00:19:03.537 | 99.00th=[ 338], 99.50th=[ 379], 99.90th=[ 465], 99.95th=[ 469], 00:19:03.537 | 99.99th=[ 586] 00:19:03.537 bw ( KiB/s): min= 8192, max= 8192, per=46.04%, avg=8192.00, stdev= 0.00, samples=1 00:19:03.537 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:03.537 lat (usec) : 250=60.21%, 500=39.63%, 750=0.08%, 1000=0.08% 00:19:03.537 cpu : usr=4.40%, sys=6.40%, ctx=3833, majf=0, minf=1 00:19:03.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.537 issued rwts: total=1782,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.537 job1: (groupid=0, jobs=1): err= 0: pid=3599421: Sun Jul 14 18:50:51 2024 00:19:03.537 read: IOPS=157, BW=631KiB/s (647kB/s)(632KiB/1001msec) 00:19:03.537 slat (nsec): min=5874, max=36880, avg=10543.47, stdev=7752.84 00:19:03.537 clat (usec): min=248, max=42144, avg=5529.58, stdev=13825.77 00:19:03.537 lat (usec): min=255, max=42151, avg=5540.12, stdev=13830.78 00:19:03.537 clat percentiles (usec): 00:19:03.537 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 
269], 00:19:03.537 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:19:03.537 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[41157], 95.00th=[42206], 00:19:03.537 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:03.537 | 99.99th=[42206] 00:19:03.537 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:03.538 slat (nsec): min=7705, max=43139, avg=13315.42, stdev=6758.59 00:19:03.538 clat (usec): min=174, max=393, avg=225.64, stdev=27.17 00:19:03.538 lat (usec): min=183, max=401, avg=238.95, stdev=30.56 00:19:03.538 clat percentiles (usec): 00:19:03.538 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:19:03.538 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 229], 00:19:03.538 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 277], 00:19:03.538 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 392], 99.95th=[ 392], 00:19:03.538 | 99.99th=[ 392] 00:19:03.538 bw ( KiB/s): min= 4096, max= 4096, per=23.02%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.538 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.538 lat (usec) : 250=64.63%, 500=32.39% 00:19:03.538 lat (msec) : 50=2.99% 00:19:03.538 cpu : usr=0.80%, sys=0.80%, ctx=671, majf=0, minf=1 00:19:03.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.538 issued rwts: total=158,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.538 job2: (groupid=0, jobs=1): err= 0: pid=3599422: Sun Jul 14 18:50:51 2024 00:19:03.538 read: IOPS=1404, BW=5619KiB/s (5754kB/s)(5788KiB/1030msec) 00:19:03.538 slat (nsec): min=4358, max=55780, avg=16430.81, stdev=9719.81 00:19:03.538 clat (usec): min=213, max=42155, avg=469.60, stdev=2413.72 
00:19:03.538 lat (usec): min=227, max=42171, avg=486.03, stdev=2413.67 00:19:03.538 clat percentiles (usec): 00:19:03.538 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 277], 00:19:03.538 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 343], 00:19:03.538 | 70.00th=[ 367], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 457], 00:19:03.538 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[41681], 99.95th=[42206], 00:19:03.538 | 99.99th=[42206] 00:19:03.538 write: IOPS=1491, BW=5965KiB/s (6108kB/s)(6144KiB/1030msec); 0 zone resets 00:19:03.538 slat (nsec): min=5898, max=43132, avg=13946.77, stdev=5222.01 00:19:03.538 clat (usec): min=146, max=331, avg=190.17, stdev=28.22 00:19:03.538 lat (usec): min=157, max=340, avg=204.12, stdev=27.00 00:19:03.538 clat percentiles (usec): 00:19:03.538 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:19:03.538 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:19:03.538 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 243], 00:19:03.538 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 318], 99.95th=[ 330], 00:19:03.538 | 99.99th=[ 330] 00:19:03.538 bw ( KiB/s): min= 4096, max= 8192, per=34.53%, avg=6144.00, stdev=2896.31, samples=2 00:19:03.538 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:19:03.538 lat (usec) : 250=54.48%, 500=44.75%, 750=0.57%, 1000=0.03% 00:19:03.538 lat (msec) : 50=0.17% 00:19:03.538 cpu : usr=2.04%, sys=4.96%, ctx=2984, majf=0, minf=2 00:19:03.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.538 issued rwts: total=1447,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.538 job3: (groupid=0, jobs=1): err= 0: pid=3599423: Sun Jul 14 18:50:51 2024 00:19:03.538 read: 
IOPS=384, BW=1537KiB/s (1574kB/s)(1592KiB/1036msec) 00:19:03.538 slat (nsec): min=4636, max=51145, avg=15195.70, stdev=10185.60 00:19:03.538 clat (usec): min=220, max=42584, avg=2287.28, stdev=8807.98 00:19:03.538 lat (usec): min=227, max=42598, avg=2302.47, stdev=8809.78 00:19:03.538 clat percentiles (usec): 00:19:03.538 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 258], 00:19:03.538 | 30.00th=[ 273], 40.00th=[ 297], 50.00th=[ 322], 60.00th=[ 343], 00:19:03.538 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 416], 95.00th=[ 627], 00:19:03.538 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:03.538 | 99.99th=[42730] 00:19:03.538 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:19:03.538 slat (nsec): min=6210, max=61611, avg=10918.97, stdev=6488.86 00:19:03.538 clat (usec): min=167, max=560, avg=215.95, stdev=34.07 00:19:03.538 lat (usec): min=173, max=622, avg=226.87, stdev=36.99 00:19:03.538 clat percentiles (usec): 00:19:03.538 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:19:03.538 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 219], 00:19:03.538 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 273], 00:19:03.538 | 99.00th=[ 318], 99.50th=[ 367], 99.90th=[ 562], 99.95th=[ 562], 00:19:03.538 | 99.99th=[ 562] 00:19:03.538 bw ( KiB/s): min= 4096, max= 4096, per=23.02%, avg=4096.00, stdev= 0.00, samples=1 00:19:03.538 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:03.538 lat (usec) : 250=57.25%, 500=39.89%, 750=0.77% 00:19:03.538 lat (msec) : 50=2.09% 00:19:03.538 cpu : usr=0.48%, sys=1.26%, ctx=911, majf=0, minf=1 00:19:03.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.538 issued rwts: total=398,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:03.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:03.538 00:19:03.538 Run status group 0 (all jobs): 00:19:03.538 READ: bw=14.3MiB/s (15.0MB/s), 631KiB/s-7121KiB/s (647kB/s-7292kB/s), io=14.8MiB (15.5MB), run=1001-1036msec 00:19:03.538 WRITE: bw=17.4MiB/s (18.2MB/s), 1977KiB/s-8184KiB/s (2024kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1036msec 00:19:03.538 00:19:03.538 Disk stats (read/write): 00:19:03.538 nvme0n1: ios=1576/1641, merge=0/0, ticks=786/311, in_queue=1097, util=97.19% 00:19:03.538 nvme0n2: ios=188/512, merge=0/0, ticks=1010/115, in_queue=1125, util=98.07% 00:19:03.538 nvme0n3: ios=1345/1536, merge=0/0, ticks=815/286, in_queue=1101, util=97.91% 00:19:03.538 nvme0n4: ios=143/512, merge=0/0, ticks=703/104, in_queue=807, util=89.68% 00:19:03.538 18:50:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:03.538 [global] 00:19:03.538 thread=1 00:19:03.538 invalidate=1 00:19:03.538 rw=write 00:19:03.538 time_based=1 00:19:03.538 runtime=1 00:19:03.538 ioengine=libaio 00:19:03.538 direct=1 00:19:03.538 bs=4096 00:19:03.538 iodepth=128 00:19:03.538 norandommap=0 00:19:03.538 numjobs=1 00:19:03.538 00:19:03.538 verify_dump=1 00:19:03.538 verify_backlog=512 00:19:03.538 verify_state_save=0 00:19:03.538 do_verify=1 00:19:03.538 verify=crc32c-intel 00:19:03.538 [job0] 00:19:03.538 filename=/dev/nvme0n1 00:19:03.538 [job1] 00:19:03.538 filename=/dev/nvme0n2 00:19:03.538 [job2] 00:19:03.538 filename=/dev/nvme0n3 00:19:03.538 [job3] 00:19:03.538 filename=/dev/nvme0n4 00:19:03.538 Could not set queue depth (nvme0n1) 00:19:03.538 Could not set queue depth (nvme0n2) 00:19:03.538 Could not set queue depth (nvme0n3) 00:19:03.538 Could not set queue depth (nvme0n4) 00:19:03.538 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:03.538 job1: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:03.538 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:03.538 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:03.538 fio-3.35 00:19:03.538 Starting 4 threads 00:19:04.912 00:19:04.912 job0: (groupid=0, jobs=1): err= 0: pid=3599734: Sun Jul 14 18:50:52 2024 00:19:04.912 read: IOPS=4966, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1004msec) 00:19:04.912 slat (usec): min=2, max=11118, avg=97.02, stdev=695.37 00:19:04.912 clat (usec): min=2442, max=39388, avg=12196.89, stdev=3413.04 00:19:04.912 lat (usec): min=3339, max=50191, avg=12293.91, stdev=3475.17 00:19:04.912 clat percentiles (usec): 00:19:04.912 | 1.00th=[ 5669], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10028], 00:19:04.912 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11469], 60.00th=[12125], 00:19:04.912 | 70.00th=[12518], 80.00th=[13960], 90.00th=[16909], 95.00th=[18744], 00:19:04.912 | 99.00th=[22414], 99.50th=[26346], 99.90th=[39584], 99.95th=[39584], 00:19:04.912 | 99.99th=[39584] 00:19:04.912 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:19:04.912 slat (usec): min=4, max=14193, avg=91.90, stdev=641.71 00:19:04.912 clat (usec): min=622, max=89284, avg=12993.79, stdev=10756.23 00:19:04.912 lat (usec): min=630, max=89291, avg=13085.70, stdev=10822.88 00:19:04.912 clat percentiles (usec): 00:19:04.912 | 1.00th=[ 2507], 5.00th=[ 5538], 10.00th=[ 7111], 20.00th=[ 8979], 00:19:04.912 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11076], 60.00th=[11863], 00:19:04.912 | 70.00th=[12125], 80.00th=[12649], 90.00th=[15270], 95.00th=[29754], 00:19:04.912 | 99.00th=[76022], 99.50th=[83362], 99.90th=[87557], 99.95th=[89654], 00:19:04.912 | 99.99th=[89654] 00:19:04.912 bw ( KiB/s): min=20472, max=20488, per=30.69%, avg=20480.00, stdev=11.31, samples=2 00:19:04.912 iops : min= 
5118, max= 5122, avg=5120.00, stdev= 2.83, samples=2 00:19:04.912 lat (usec) : 750=0.05%, 1000=0.05% 00:19:04.912 lat (msec) : 2=0.08%, 4=1.07%, 10=24.32%, 20=69.60%, 50=3.57% 00:19:04.912 lat (msec) : 100=1.26% 00:19:04.912 cpu : usr=5.48%, sys=6.78%, ctx=493, majf=0, minf=1 00:19:04.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:04.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.912 issued rwts: total=4986,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.912 job1: (groupid=0, jobs=1): err= 0: pid=3599756: Sun Jul 14 18:50:52 2024 00:19:04.912 read: IOPS=3045, BW=11.9MiB/s (12.5MB/s)(12.5MiB/1050msec) 00:19:04.912 slat (usec): min=2, max=20701, avg=137.52, stdev=993.99 00:19:04.912 clat (usec): min=4966, max=71198, avg=17982.02, stdev=13429.74 00:19:04.912 lat (usec): min=4972, max=71216, avg=18119.54, stdev=13496.23 00:19:04.912 clat percentiles (usec): 00:19:04.912 | 1.00th=[ 5604], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10814], 00:19:04.912 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11600], 60.00th=[14091], 00:19:04.912 | 70.00th=[16909], 80.00th=[22414], 90.00th=[33817], 95.00th=[54789], 00:19:04.912 | 99.00th=[68682], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:19:04.912 | 99.99th=[70779] 00:19:04.912 write: IOPS=3413, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1050msec); 0 zone resets 00:19:04.912 slat (usec): min=4, max=12551, avg=149.67, stdev=780.37 00:19:04.912 clat (usec): min=1167, max=105766, avg=21112.49, stdev=19413.98 00:19:04.912 lat (usec): min=1176, max=105773, avg=21262.17, stdev=19516.03 00:19:04.912 clat percentiles (msec): 00:19:04.912 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 12], 00:19:04.912 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 16], 00:19:04.912 | 70.00th=[ 22], 80.00th=[ 26], 
90.00th=[ 46], 95.00th=[ 69], 00:19:04.912 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 106], 99.95th=[ 106], 00:19:04.912 | 99.99th=[ 106] 00:19:04.912 bw ( KiB/s): min= 8192, max=20464, per=21.47%, avg=14328.00, stdev=8677.61, samples=2 00:19:04.912 iops : min= 2048, max= 5116, avg=3582.00, stdev=2169.40, samples=2 00:19:04.912 lat (msec) : 2=0.28%, 4=0.86%, 10=13.67%, 20=55.99%, 50=21.62% 00:19:04.912 lat (msec) : 100=6.90%, 250=0.69% 00:19:04.912 cpu : usr=3.62%, sys=5.72%, ctx=506, majf=0, minf=1 00:19:04.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:04.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.913 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.913 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.913 job2: (groupid=0, jobs=1): err= 0: pid=3599770: Sun Jul 14 18:50:52 2024 00:19:04.913 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:19:04.913 slat (usec): min=2, max=10399, avg=109.75, stdev=627.26 00:19:04.913 clat (usec): min=4256, max=24814, avg=13450.62, stdev=2475.55 00:19:04.913 lat (usec): min=4261, max=24829, avg=13560.37, stdev=2526.76 00:19:04.913 clat percentiles (usec): 00:19:04.913 | 1.00th=[ 6063], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11863], 00:19:04.913 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12911], 60.00th=[14222], 00:19:04.913 | 70.00th=[14484], 80.00th=[14877], 90.00th=[17171], 95.00th=[18482], 00:19:04.913 | 99.00th=[20055], 99.50th=[20317], 99.90th=[21103], 99.95th=[21627], 00:19:04.913 | 99.99th=[24773] 00:19:04.913 write: IOPS=4002, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1011msec); 0 zone resets 00:19:04.913 slat (usec): min=3, max=45085, avg=140.58, stdev=1461.73 00:19:04.913 clat (usec): min=1515, max=192158, avg=16373.95, stdev=15497.95 00:19:04.913 lat (usec): min=1519, max=192167, avg=16514.53, stdev=15745.51 
00:19:04.913 clat percentiles (msec): 00:19:04.913 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 11], 20.00th=[ 12], 00:19:04.913 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:19:04.913 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 20], 95.00th=[ 37], 00:19:04.913 | 99.00th=[ 88], 99.50th=[ 131], 99.90th=[ 174], 99.95th=[ 192], 00:19:04.913 | 99.99th=[ 192] 00:19:04.913 bw ( KiB/s): min=13880, max=17472, per=23.49%, avg=15676.00, stdev=2539.93, samples=2 00:19:04.913 iops : min= 3470, max= 4368, avg=3919.00, stdev=634.98, samples=2 00:19:04.913 lat (msec) : 2=0.37%, 4=0.69%, 10=6.46%, 20=86.73%, 50=4.90% 00:19:04.913 lat (msec) : 100=0.43%, 250=0.42% 00:19:04.913 cpu : usr=3.96%, sys=6.34%, ctx=496, majf=0, minf=1 00:19:04.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:04.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.913 issued rwts: total=3584,4047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.913 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.913 job3: (groupid=0, jobs=1): err= 0: pid=3599771: Sun Jul 14 18:50:52 2024 00:19:04.913 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:19:04.913 slat (usec): min=2, max=13687, avg=94.90, stdev=664.84 00:19:04.913 clat (usec): min=1020, max=62247, avg=13420.18, stdev=6152.28 00:19:04.913 lat (usec): min=1028, max=62252, avg=13515.07, stdev=6191.35 00:19:04.913 clat percentiles (usec): 00:19:04.913 | 1.00th=[ 1237], 5.00th=[ 3195], 10.00th=[ 7898], 20.00th=[11338], 00:19:04.913 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13173], 60.00th=[13829], 00:19:04.913 | 70.00th=[14353], 80.00th=[15008], 90.00th=[16712], 95.00th=[21627], 00:19:04.913 | 99.00th=[32113], 99.50th=[58459], 99.90th=[61080], 99.95th=[61080], 00:19:04.913 | 99.99th=[62129] 00:19:04.913 write: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1007msec); 0 zone resets 
00:19:04.913 slat (usec): min=3, max=18177, avg=90.50, stdev=664.29 00:19:04.913 clat (usec): min=251, max=59723, avg=13844.40, stdev=6826.28 00:19:04.913 lat (usec): min=560, max=59743, avg=13934.90, stdev=6864.97 00:19:04.913 clat percentiles (usec): 00:19:04.913 | 1.00th=[ 1467], 5.00th=[ 4080], 10.00th=[ 6194], 20.00th=[10421], 00:19:04.913 | 30.00th=[11207], 40.00th=[12256], 50.00th=[13829], 60.00th=[14484], 00:19:04.913 | 70.00th=[14615], 80.00th=[15664], 90.00th=[20317], 95.00th=[22676], 00:19:04.913 | 99.00th=[45351], 99.50th=[54264], 99.90th=[55313], 99.95th=[59507], 00:19:04.913 | 99.99th=[59507] 00:19:04.913 bw ( KiB/s): min=16624, max=20480, per=27.80%, avg=18552.00, stdev=2726.60, samples=2 00:19:04.913 iops : min= 4156, max= 5120, avg=4638.00, stdev=681.65, samples=2 00:19:04.913 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.18% 00:19:04.913 lat (msec) : 2=1.49%, 4=3.23%, 10=9.23%, 20=75.68%, 50=9.27% 00:19:04.913 lat (msec) : 100=0.83% 00:19:04.913 cpu : usr=3.18%, sys=4.97%, ctx=514, majf=0, minf=1 00:19:04.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:04.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.913 issued rwts: total=4608,4766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.913 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.913 00:19:04.913 Run status group 0 (all jobs): 00:19:04.913 READ: bw=60.9MiB/s (63.9MB/s), 11.9MiB/s-19.4MiB/s (12.5MB/s-20.3MB/s), io=64.0MiB (67.1MB), run=1004-1050msec 00:19:04.913 WRITE: bw=65.2MiB/s (68.3MB/s), 13.3MiB/s-19.9MiB/s (14.0MB/s-20.9MB/s), io=68.4MiB (71.7MB), run=1004-1050msec 00:19:04.913 00:19:04.913 Disk stats (read/write): 00:19:04.913 nvme0n1: ios=4136/4279, merge=0/0, ticks=48008/55365, in_queue=103373, util=97.80% 00:19:04.913 nvme0n2: ios=2587/2647, merge=0/0, ticks=43612/61364, in_queue=104976, util=97.04% 00:19:04.913 
nvme0n3: ios=3108/3327, merge=0/0, ticks=23968/25710, in_queue=49678, util=96.42% 00:19:04.913 nvme0n4: ios=3827/4096, merge=0/0, ticks=38648/42605, in_queue=81253, util=96.18% 00:19:04.913 18:50:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:04.913 [global] 00:19:04.913 thread=1 00:19:04.913 invalidate=1 00:19:04.913 rw=randwrite 00:19:04.913 time_based=1 00:19:04.913 runtime=1 00:19:04.913 ioengine=libaio 00:19:04.913 direct=1 00:19:04.913 bs=4096 00:19:04.913 iodepth=128 00:19:04.913 norandommap=0 00:19:04.913 numjobs=1 00:19:04.913 00:19:04.913 verify_dump=1 00:19:04.913 verify_backlog=512 00:19:04.913 verify_state_save=0 00:19:04.913 do_verify=1 00:19:04.913 verify=crc32c-intel 00:19:04.913 [job0] 00:19:04.913 filename=/dev/nvme0n1 00:19:04.913 [job1] 00:19:04.913 filename=/dev/nvme0n2 00:19:04.913 [job2] 00:19:04.913 filename=/dev/nvme0n3 00:19:04.913 [job3] 00:19:04.913 filename=/dev/nvme0n4 00:19:04.913 Could not set queue depth (nvme0n1) 00:19:04.913 Could not set queue depth (nvme0n2) 00:19:04.913 Could not set queue depth (nvme0n3) 00:19:04.913 Could not set queue depth (nvme0n4) 00:19:05.251 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.251 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.251 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.251 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.251 fio-3.35 00:19:05.251 Starting 4 threads 00:19:06.201 00:19:06.201 job0: (groupid=0, jobs=1): err= 0: pid=3599997: Sun Jul 14 18:50:54 2024 00:19:06.201 read: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1007msec) 00:19:06.201 slat (usec): min=2, max=12951, 
avg=151.60, stdev=902.05 00:19:06.201 clat (usec): min=4007, max=37122, avg=18020.91, stdev=6588.90 00:19:06.201 lat (usec): min=7616, max=37131, avg=18172.51, stdev=6624.94 00:19:06.201 clat percentiles (usec): 00:19:06.201 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11994], 00:19:06.201 | 30.00th=[12387], 40.00th=[14615], 50.00th=[16909], 60.00th=[19268], 00:19:06.201 | 70.00th=[22938], 80.00th=[23462], 90.00th=[24773], 95.00th=[32375], 00:19:06.201 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:19:06.201 | 99.99th=[36963] 00:19:06.201 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:19:06.201 slat (usec): min=3, max=12321, avg=169.73, stdev=935.56 00:19:06.201 clat (usec): min=3534, max=42906, avg=23687.71, stdev=9126.96 00:19:06.201 lat (usec): min=3541, max=42917, avg=23857.43, stdev=9157.37 00:19:06.201 clat percentiles (usec): 00:19:06.201 | 1.00th=[ 3916], 5.00th=[10945], 10.00th=[11994], 20.00th=[14222], 00:19:06.201 | 30.00th=[18744], 40.00th=[21365], 50.00th=[22938], 60.00th=[25035], 00:19:06.201 | 70.00th=[28443], 80.00th=[33817], 90.00th=[36439], 95.00th=[40109], 00:19:06.201 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:19:06.201 | 99.99th=[42730] 00:19:06.201 bw ( KiB/s): min=12064, max=12536, per=18.92%, avg=12300.00, stdev=333.75, samples=2 00:19:06.201 iops : min= 3016, max= 3134, avg=3075.00, stdev=83.44, samples=2 00:19:06.201 lat (msec) : 4=0.71%, 10=4.56%, 20=42.25%, 50=52.49% 00:19:06.201 cpu : usr=3.38%, sys=3.58%, ctx=316, majf=0, minf=1 00:19:06.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:06.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.201 issued rwts: total=3021,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.201 latency : target=0, window=0, percentile=100.00%, depth=128 
00:19:06.201 job1: (groupid=0, jobs=1): err= 0: pid=3599998: Sun Jul 14 18:50:54 2024 00:19:06.201 read: IOPS=4859, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1002msec) 00:19:06.201 slat (usec): min=2, max=43246, avg=106.47, stdev=867.09 00:19:06.201 clat (usec): min=541, max=53987, avg=13314.32, stdev=7633.96 00:19:06.201 lat (usec): min=2942, max=53996, avg=13420.79, stdev=7666.46 00:19:06.201 clat percentiles (usec): 00:19:06.201 | 1.00th=[ 6325], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10421], 00:19:06.201 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11731], 00:19:06.201 | 70.00th=[11994], 80.00th=[12518], 90.00th=[17433], 95.00th=[23462], 00:19:06.201 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:19:06.201 | 99.99th=[53740] 00:19:06.201 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:19:06.201 slat (usec): min=3, max=6748, avg=86.05, stdev=427.31 00:19:06.201 clat (usec): min=3377, max=53921, avg=12075.59, stdev=4482.38 00:19:06.202 lat (usec): min=3383, max=53933, avg=12161.64, stdev=4483.03 00:19:06.202 clat percentiles (usec): 00:19:06.202 | 1.00th=[ 7701], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10552], 00:19:06.202 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:19:06.202 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13960], 95.00th=[16909], 00:19:06.202 | 99.00th=[39584], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:19:06.202 | 99.99th=[53740] 00:19:06.202 bw ( KiB/s): min=20480, max=20480, per=31.50%, avg=20480.00, stdev= 0.00, samples=2 00:19:06.202 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:06.202 lat (usec) : 750=0.01% 00:19:06.202 lat (msec) : 4=0.38%, 10=11.46%, 20=83.11%, 50=3.78%, 100=1.25% 00:19:06.202 cpu : usr=5.49%, sys=7.79%, ctx=596, majf=0, minf=1 00:19:06.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:06.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:06.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.202 issued rwts: total=4869,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.202 job2: (groupid=0, jobs=1): err= 0: pid=3599999: Sun Jul 14 18:50:54 2024 00:19:06.202 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:19:06.202 slat (usec): min=2, max=22457, avg=188.79, stdev=1188.13 00:19:06.202 clat (usec): min=3499, max=59063, avg=22493.32, stdev=9275.48 00:19:06.202 lat (usec): min=7981, max=59076, avg=22682.11, stdev=9340.38 00:19:06.202 clat percentiles (usec): 00:19:06.202 | 1.00th=[ 9110], 5.00th=[12780], 10.00th=[15139], 20.00th=[16909], 00:19:06.202 | 30.00th=[17433], 40.00th=[18744], 50.00th=[20055], 60.00th=[20317], 00:19:06.202 | 70.00th=[22414], 80.00th=[28181], 90.00th=[35390], 95.00th=[40633], 00:19:06.202 | 99.00th=[56361], 99.50th=[56361], 99.90th=[57410], 99.95th=[58459], 00:19:06.202 | 99.99th=[58983] 00:19:06.202 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:19:06.202 slat (usec): min=3, max=14123, avg=161.70, stdev=744.53 00:19:06.202 clat (usec): min=7011, max=58501, avg=22521.87, stdev=8293.26 00:19:06.202 lat (usec): min=7954, max=58508, avg=22683.57, stdev=8340.87 00:19:06.202 clat percentiles (usec): 00:19:06.202 | 1.00th=[ 8160], 5.00th=[11994], 10.00th=[15139], 20.00th=[16057], 00:19:06.202 | 30.00th=[18482], 40.00th=[20579], 50.00th=[22676], 60.00th=[22938], 00:19:06.202 | 70.00th=[23462], 80.00th=[23987], 90.00th=[31065], 95.00th=[40633], 00:19:06.202 | 99.00th=[54264], 99.50th=[55313], 99.90th=[58459], 99.95th=[58459], 00:19:06.202 | 99.99th=[58459] 00:19:06.202 bw ( KiB/s): min=11096, max=12536, per=18.17%, avg=11816.00, stdev=1018.23, samples=2 00:19:06.202 iops : min= 2774, max= 3134, avg=2954.00, stdev=254.56, samples=2 00:19:06.202 lat (msec) : 4=0.02%, 10=2.48%, 20=38.85%, 50=56.49%, 100=2.16% 00:19:06.202 cpu 
: usr=2.78%, sys=3.97%, ctx=354, majf=0, minf=1 00:19:06.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:06.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.202 issued rwts: total=2570,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.202 job3: (groupid=0, jobs=1): err= 0: pid=3600000: Sun Jul 14 18:50:54 2024 00:19:06.202 read: IOPS=4841, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1004msec) 00:19:06.202 slat (usec): min=3, max=7077, avg=97.73, stdev=564.26 00:19:06.202 clat (usec): min=903, max=24963, avg=12452.19, stdev=2523.55 00:19:06.202 lat (usec): min=3704, max=25103, avg=12549.92, stdev=2560.37 00:19:06.202 clat percentiles (usec): 00:19:06.202 | 1.00th=[ 6456], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[11207], 00:19:06.202 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:19:06.202 | 70.00th=[12649], 80.00th=[13960], 90.00th=[15008], 95.00th=[17171], 00:19:06.202 | 99.00th=[22414], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:19:06.202 | 99.99th=[25035] 00:19:06.202 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:19:06.202 slat (usec): min=4, max=8799, avg=94.67, stdev=589.48 00:19:06.202 clat (usec): min=5136, max=32417, avg=12870.73, stdev=3500.63 00:19:06.202 lat (usec): min=5145, max=32425, avg=12965.40, stdev=3544.28 00:19:06.202 clat percentiles (usec): 00:19:06.202 | 1.00th=[ 7439], 5.00th=[10159], 10.00th=[11076], 20.00th=[11469], 00:19:06.202 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:19:06.202 | 70.00th=[12649], 80.00th=[13173], 90.00th=[15008], 95.00th=[21627], 00:19:06.202 | 99.00th=[28705], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375], 00:19:06.202 | 99.99th=[32375] 00:19:06.202 bw ( KiB/s): min=19551, max=21448, per=31.53%, 
avg=20499.50, stdev=1341.38, samples=2 00:19:06.202 iops : min= 4887, max= 5362, avg=5124.50, stdev=335.88, samples=2 00:19:06.202 lat (usec) : 1000=0.01% 00:19:06.202 lat (msec) : 4=0.11%, 10=6.29%, 20=89.63%, 50=3.96% 00:19:06.202 cpu : usr=5.68%, sys=7.78%, ctx=422, majf=0, minf=1 00:19:06.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:06.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.202 issued rwts: total=4861,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.202 00:19:06.202 Run status group 0 (all jobs): 00:19:06.202 READ: bw=59.4MiB/s (62.3MB/s), 9.96MiB/s-19.0MiB/s (10.4MB/s-19.9MB/s), io=59.8MiB (62.8MB), run=1002-1008msec 00:19:06.202 WRITE: bw=63.5MiB/s (66.6MB/s), 11.9MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1002-1008msec 00:19:06.202 00:19:06.202 Disk stats (read/write): 00:19:06.202 nvme0n1: ios=2449/2560, merge=0/0, ticks=22041/26990, in_queue=49031, util=87.17% 00:19:06.202 nvme0n2: ios=4146/4199, merge=0/0, ticks=16607/14146, in_queue=30753, util=93.71% 00:19:06.202 nvme0n3: ios=2442/2560, merge=0/0, ticks=23375/21858, in_queue=45233, util=97.60% 00:19:06.202 nvme0n4: ios=4149/4255, merge=0/0, ticks=25714/24186, in_queue=49900, util=98.32% 00:19:06.202 18:50:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:06.202 18:50:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3600139 00:19:06.202 18:50:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:06.202 18:50:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:06.202 [global] 00:19:06.202 thread=1 00:19:06.202 invalidate=1 00:19:06.202 rw=read 00:19:06.202 time_based=1 00:19:06.202 runtime=10 
00:19:06.202 ioengine=libaio 00:19:06.202 direct=1 00:19:06.202 bs=4096 00:19:06.202 iodepth=1 00:19:06.202 norandommap=1 00:19:06.202 numjobs=1 00:19:06.202 00:19:06.202 [job0] 00:19:06.202 filename=/dev/nvme0n1 00:19:06.202 [job1] 00:19:06.202 filename=/dev/nvme0n2 00:19:06.202 [job2] 00:19:06.202 filename=/dev/nvme0n3 00:19:06.202 [job3] 00:19:06.202 filename=/dev/nvme0n4 00:19:06.460 Could not set queue depth (nvme0n1) 00:19:06.460 Could not set queue depth (nvme0n2) 00:19:06.460 Could not set queue depth (nvme0n3) 00:19:06.460 Could not set queue depth (nvme0n4) 00:19:06.460 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.460 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.460 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.460 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.460 fio-3.35 00:19:06.460 Starting 4 threads 00:19:09.748 18:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:09.748 18:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:09.748 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=33865728, buflen=4096 00:19:09.748 fio: pid=3600240, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:09.748 18:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:09.748 18:50:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:09.748 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read 
offset=35487744, buflen=4096 00:19:09.748 fio: pid=3600239, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:10.006 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=52178944, buflen=4096 00:19:10.006 fio: pid=3600237, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:10.264 18:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:10.264 18:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:10.523 18:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:10.523 18:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:10.523 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1478656, buflen=4096 00:19:10.523 fio: pid=3600238, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:10.523 00:19:10.523 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3600237: Sun Jul 14 18:50:58 2024 00:19:10.523 read: IOPS=3704, BW=14.5MiB/s (15.2MB/s)(49.8MiB/3439msec) 00:19:10.523 slat (usec): min=5, max=28647, avg=12.72, stdev=277.88 00:19:10.523 clat (usec): min=190, max=41800, avg=253.07, stdev=371.81 00:19:10.523 lat (usec): min=196, max=41807, avg=265.79, stdev=465.14 00:19:10.523 clat percentiles (usec): 00:19:10.523 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 221], 00:19:10.523 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 247], 00:19:10.523 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:19:10.523 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 873], 00:19:10.523 | 99.99th=[ 1221] 00:19:10.523 bw ( KiB/s): 
min=13632, max=15928, per=46.26%, avg=14949.33, stdev=790.43, samples=6 00:19:10.523 iops : min= 3408, max= 3982, avg=3737.33, stdev=197.61, samples=6 00:19:10.523 lat (usec) : 250=62.72%, 500=35.94%, 750=1.26%, 1000=0.05% 00:19:10.523 lat (msec) : 2=0.02%, 50=0.01% 00:19:10.523 cpu : usr=2.97%, sys=4.92%, ctx=12743, majf=0, minf=1 00:19:10.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.523 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.523 issued rwts: total=12740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.523 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3600238: Sun Jul 14 18:50:58 2024 00:19:10.523 read: IOPS=97, BW=388KiB/s (398kB/s)(1444KiB/3717msec) 00:19:10.523 slat (usec): min=6, max=17876, avg=145.14, stdev=1363.93 00:19:10.523 clat (usec): min=269, max=41635, avg=10074.88, stdev=17287.92 00:19:10.523 lat (usec): min=280, max=58981, avg=10176.94, stdev=17448.37 00:19:10.523 clat percentiles (usec): 00:19:10.523 | 1.00th=[ 289], 5.00th=[ 310], 10.00th=[ 322], 20.00th=[ 334], 00:19:10.523 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 416], 00:19:10.523 | 70.00th=[ 510], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:10.523 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:10.523 | 99.99th=[41681] 00:19:10.523 bw ( KiB/s): min= 93, max= 816, per=1.26%, avg=406.43, stdev=254.62, samples=7 00:19:10.523 iops : min= 23, max= 204, avg=101.57, stdev=63.71, samples=7 00:19:10.523 lat (usec) : 500=68.51%, 750=7.18% 00:19:10.523 lat (msec) : 20=0.28%, 50=23.76% 00:19:10.523 cpu : usr=0.05%, sys=0.16%, ctx=367, majf=0, minf=1 00:19:10.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.523 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.523 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.523 issued rwts: total=362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.523 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3600239: Sun Jul 14 18:50:58 2024 00:19:10.523 read: IOPS=2729, BW=10.7MiB/s (11.2MB/s)(33.8MiB/3174msec) 00:19:10.523 slat (nsec): min=5322, max=50238, avg=10098.53, stdev=5146.19 00:19:10.523 clat (usec): min=217, max=41180, avg=351.18, stdev=1443.23 00:19:10.523 lat (usec): min=223, max=41196, avg=361.28, stdev=1443.61 00:19:10.523 clat percentiles (usec): 00:19:10.523 | 1.00th=[ 235], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 258], 00:19:10.523 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 297], 00:19:10.523 | 70.00th=[ 314], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 383], 00:19:10.523 | 99.00th=[ 486], 99.50th=[ 519], 99.90th=[40633], 99.95th=[41157], 00:19:10.523 | 99.99th=[41157] 00:19:10.523 bw ( KiB/s): min= 6968, max=13784, per=33.36%, avg=10782.67, stdev=2486.39, samples=6 00:19:10.523 iops : min= 1742, max= 3446, avg=2695.67, stdev=621.60, samples=6 00:19:10.523 lat (usec) : 250=10.49%, 500=88.69%, 750=0.66%, 1000=0.01% 00:19:10.523 lat (msec) : 2=0.01%, 50=0.13% 00:19:10.523 cpu : usr=1.95%, sys=4.10%, ctx=8667, majf=0, minf=1 00:19:10.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.523 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.523 issued rwts: total=8665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.523 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3600240: 
Sun Jul 14 18:50:58 2024 00:19:10.523 read: IOPS=2824, BW=11.0MiB/s (11.6MB/s)(32.3MiB/2928msec) 00:19:10.523 slat (nsec): min=4289, max=64522, avg=11174.73, stdev=7259.51 00:19:10.523 clat (usec): min=206, max=42153, avg=337.49, stdev=1509.67 00:19:10.523 lat (usec): min=210, max=42161, avg=348.66, stdev=1509.83 00:19:10.523 clat percentiles (usec): 00:19:10.523 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:19:10.523 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 273], 00:19:10.523 | 70.00th=[ 293], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 383], 00:19:10.523 | 99.00th=[ 486], 99.50th=[ 529], 99.90th=[41157], 99.95th=[42206], 00:19:10.523 | 99.99th=[42206] 00:19:10.523 bw ( KiB/s): min= 7576, max=15936, per=39.47%, avg=12755.20, stdev=3529.10, samples=5 00:19:10.523 iops : min= 1894, max= 3984, avg=3188.80, stdev=882.27, samples=5 00:19:10.523 lat (usec) : 250=47.91%, 500=51.23%, 750=0.69%, 1000=0.01% 00:19:10.523 lat (msec) : 2=0.01%, 50=0.13% 00:19:10.523 cpu : usr=1.67%, sys=3.89%, ctx=8269, majf=0, minf=1 00:19:10.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:10.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.523 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.523 issued rwts: total=8269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:10.523 00:19:10.523 Run status group 0 (all jobs): 00:19:10.523 READ: bw=31.6MiB/s (33.1MB/s), 388KiB/s-14.5MiB/s (398kB/s-15.2MB/s), io=117MiB (123MB), run=2928-3717msec 00:19:10.523 00:19:10.523 Disk stats (read/write): 00:19:10.523 nvme0n1: ios=12502/0, merge=0/0, ticks=3225/0, in_queue=3225, util=98.57% 00:19:10.523 nvme0n2: ios=358/0, merge=0/0, ticks=3552/0, in_queue=3552, util=96.33% 00:19:10.523 nvme0n3: ios=8519/0, merge=0/0, ticks=3054/0, in_queue=3054, util=99.69% 00:19:10.523 nvme0n4: ios=8266/0, merge=0/0, 
ticks=2620/0, in_queue=2620, util=96.74% 00:19:10.783 18:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:10.783 18:50:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:11.043 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:11.043 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:11.043 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:11.043 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:11.300 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:11.300 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:11.559 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:11.559 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3600139 00:19:11.559 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:11.559 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:11.817 nvmf hotplug test: fio failed as expected 00:19:11.817 18:50:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:12.074 rmmod 
nvme_tcp 00:19:12.074 rmmod nvme_fabrics 00:19:12.074 rmmod nvme_keyring 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3598121 ']' 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3598121 00:19:12.074 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3598121 ']' 00:19:12.075 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3598121 00:19:12.075 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:12.075 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.075 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3598121 00:19:12.075 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:12.075 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:12.075 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3598121' 00:19:12.075 killing process with pid 3598121 00:19:12.075 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3598121 00:19:12.075 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3598121 00:19:12.332 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:12.332 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:12.332 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:12.332 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:12.332 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:12.332 18:51:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.332 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.332 18:51:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.865 18:51:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:14.865 00:19:14.865 real 0m23.414s 00:19:14.865 user 1m22.223s 00:19:14.865 sys 0m7.125s 00:19:14.865 18:51:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:14.865 18:51:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.865 ************************************ 00:19:14.865 END TEST nvmf_fio_target 00:19:14.865 ************************************ 00:19:14.865 18:51:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:14.865 18:51:02 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:14.865 18:51:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:14.865 18:51:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.865 18:51:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:14.865 ************************************ 00:19:14.865 START TEST nvmf_bdevio 00:19:14.865 ************************************ 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:14.865 * Looking for test storage... 
00:19:14.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.865 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.866 18:51:02 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:16.781 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.781 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:16.782 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:16.782 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:16.782 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:16.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:19:16.782 00:19:16.782 --- 10.0.0.2 ping statistics --- 00:19:16.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.782 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:19:16.782 00:19:16.782 --- 10.0.0.1 ping statistics --- 00:19:16.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.782 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3602969 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3602969 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3602969 ']' 00:19:16.782 18:51:04 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.782 18:51:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:16.782 [2024-07-14 18:51:04.817663] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:19:16.782 [2024-07-14 18:51:04.817735] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.782 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.782 [2024-07-14 18:51:04.883653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.782 [2024-07-14 18:51:04.976373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.782 [2024-07-14 18:51:04.976429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.782 [2024-07-14 18:51:04.976457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.782 [2024-07-14 18:51:04.976468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.782 [2024-07-14 18:51:04.976478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:16.782 [2024-07-14 18:51:04.976638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:16.782 [2024-07-14 18:51:04.976688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:16.782 [2024-07-14 18:51:04.976737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:16.782 [2024-07-14 18:51:04.976739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.042 [2024-07-14 18:51:05.128785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.042 Malloc0 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.042 [2024-07-14 18:51:05.180803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:19:17.042 { 00:19:17.042 "params": { 00:19:17.042 "name": "Nvme$subsystem", 00:19:17.042 "trtype": "$TEST_TRANSPORT", 00:19:17.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.042 "adrfam": "ipv4", 00:19:17.042 "trsvcid": "$NVMF_PORT", 00:19:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.042 "hdgst": ${hdgst:-false}, 00:19:17.042 "ddgst": ${ddgst:-false} 00:19:17.042 }, 00:19:17.042 "method": "bdev_nvme_attach_controller" 00:19:17.042 } 00:19:17.042 EOF 00:19:17.042 )") 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:17.042 18:51:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:17.042 "params": { 00:19:17.042 "name": "Nvme1", 00:19:17.042 "trtype": "tcp", 00:19:17.042 "traddr": "10.0.0.2", 00:19:17.042 "adrfam": "ipv4", 00:19:17.042 "trsvcid": "4420", 00:19:17.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.042 "hdgst": false, 00:19:17.042 "ddgst": false 00:19:17.042 }, 00:19:17.042 "method": "bdev_nvme_attach_controller" 00:19:17.042 }' 00:19:17.042 [2024-07-14 18:51:05.227986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:19:17.042 [2024-07-14 18:51:05.228061] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603000 ] 00:19:17.042 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.302 [2024-07-14 18:51:05.293585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:17.302 [2024-07-14 18:51:05.385134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.302 [2024-07-14 18:51:05.388901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.302 [2024-07-14 18:51:05.388913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.561 I/O targets: 00:19:17.561 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:17.561 00:19:17.561 00:19:17.561 CUnit - A unit testing framework for C - Version 2.1-3 00:19:17.561 http://cunit.sourceforge.net/ 00:19:17.561 00:19:17.561 00:19:17.561 Suite: bdevio tests on: Nvme1n1 00:19:17.561 Test: blockdev write read block ...passed 00:19:17.561 Test: blockdev write zeroes read block ...passed 00:19:17.561 Test: blockdev write zeroes read no split ...passed 00:19:17.561 Test: blockdev write zeroes read split ...passed 00:19:17.561 Test: blockdev write zeroes read split partial ...passed 00:19:17.561 Test: blockdev reset ...[2024-07-14 18:51:05.722716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.561 [2024-07-14 18:51:05.722826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cfc90 (9): Bad file descriptor 00:19:17.561 [2024-07-14 18:51:05.782602] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:17.561 passed 00:19:17.820 Test: blockdev write read 8 blocks ...passed 00:19:17.820 Test: blockdev write read size > 128k ...passed 00:19:17.820 Test: blockdev write read invalid size ...passed 00:19:17.820 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:17.820 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:17.820 Test: blockdev write read max offset ...passed 00:19:17.820 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:17.820 Test: blockdev writev readv 8 blocks ...passed 00:19:17.820 Test: blockdev writev readv 30 x 1block ...passed 00:19:18.080 Test: blockdev writev readv block ...passed 00:19:18.080 Test: blockdev writev readv size > 128k ...passed 00:19:18.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:18.080 Test: blockdev comparev and writev ...[2024-07-14 18:51:06.083324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.080 [2024-07-14 18:51:06.083359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.083392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.080 [2024-07-14 18:51:06.083410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.083758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.080 [2024-07-14 18:51:06.083783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.083804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.080 [2024-07-14 18:51:06.083820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.084131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.080 [2024-07-14 18:51:06.084156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.084177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.080 [2024-07-14 18:51:06.084193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.084528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.080 [2024-07-14 18:51:06.084551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.084572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:18.080 [2024-07-14 18:51:06.084588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:18.080 passed 00:19:18.080 Test: blockdev nvme passthru rw ...passed 00:19:18.080 Test: blockdev nvme passthru vendor specific ...[2024-07-14 18:51:06.168167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:18.080 [2024-07-14 18:51:06.168193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.168338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:18.080 [2024-07-14 18:51:06.168361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.168509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:18.080 [2024-07-14 18:51:06.168532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:18.080 [2024-07-14 18:51:06.168672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:18.081 [2024-07-14 18:51:06.168695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:18.081 passed 00:19:18.081 Test: blockdev nvme admin passthru ...passed 00:19:18.081 Test: blockdev copy ...passed 00:19:18.081 00:19:18.081 Run Summary: Type Total Ran Passed Failed Inactive 00:19:18.081 suites 1 1 n/a 0 0 00:19:18.081 tests 23 23 23 0 0 00:19:18.081 asserts 152 152 152 0 n/a 00:19:18.081 00:19:18.081 Elapsed time = 1.324 seconds 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.338 rmmod nvme_tcp 00:19:18.338 rmmod nvme_fabrics 00:19:18.338 rmmod nvme_keyring 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3602969 ']' 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3602969 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 3602969 ']' 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3602969 00:19:18.338 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:18.339 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.339 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3602969 00:19:18.339 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:18.339 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:18.339 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3602969' 00:19:18.339 killing process with pid 3602969 00:19:18.339 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 
3602969 00:19:18.339 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3602969 00:19:18.596 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:18.596 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:18.596 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:18.596 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.596 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.596 18:51:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.596 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.596 18:51:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.127 18:51:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:21.127 00:19:21.127 real 0m6.196s 00:19:21.127 user 0m9.942s 00:19:21.127 sys 0m1.999s 00:19:21.127 18:51:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:21.127 18:51:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:21.127 ************************************ 00:19:21.127 END TEST nvmf_bdevio 00:19:21.127 ************************************ 00:19:21.127 18:51:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:21.127 18:51:08 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:21.127 18:51:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:21.127 18:51:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.127 18:51:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:21.127 ************************************ 00:19:21.127 START TEST nvmf_auth_target 00:19:21.127 
************************************ 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:21.127 * Looking for test storage... 00:19:21.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.127 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.128 18:51:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.128 18:51:08 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.128 18:51:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:21.128 18:51:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.035 18:51:10 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:23.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.035 18:51:10 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:23.035 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:23.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:23.035 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:23.035 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:19:23.036 00:19:23.036 --- 10.0.0.2 ping statistics --- 00:19:23.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.036 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:19:23.036 00:19:23.036 --- 10.0.0.1 ping statistics --- 00:19:23.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.036 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3605624 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3605624 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3605624 ']' 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.036 18:51:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.036 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3605708 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=d024d6532452c6f0837bc49c82eaeac44b6d084d3575068d 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MVf 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d024d6532452c6f0837bc49c82eaeac44b6d084d3575068d 0 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d024d6532452c6f0837bc49c82eaeac44b6d084d3575068d 0 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d024d6532452c6f0837bc49c82eaeac44b6d084d3575068d 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MVf 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MVf 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.MVf 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:23.295 18:51:11 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e63948bf58ead858dacc4ed54d9fcfceb8d14d63602585ef1537246c7cdccb78 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SPw 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e63948bf58ead858dacc4ed54d9fcfceb8d14d63602585ef1537246c7cdccb78 3 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e63948bf58ead858dacc4ed54d9fcfceb8d14d63602585ef1537246c7cdccb78 3 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e63948bf58ead858dacc4ed54d9fcfceb8d14d63602585ef1537246c7cdccb78 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SPw 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SPw 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.SPw 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a2e1f4bfa85c40a802def83a6b3d3067 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aLc 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a2e1f4bfa85c40a802def83a6b3d3067 1 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a2e1f4bfa85c40a802def83a6b3d3067 1 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a2e1f4bfa85c40a802def83a6b3d3067 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aLc 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aLc 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.aLc 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0df4eb62e465c01dc8197dec894ced9dbdcd8b306c5e5fc1 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rON 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0df4eb62e465c01dc8197dec894ced9dbdcd8b306c5e5fc1 2 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0df4eb62e465c01dc8197dec894ced9dbdcd8b306c5e5fc1 2 00:19:23.295 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0df4eb62e465c01dc8197dec894ced9dbdcd8b306c5e5fc1 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rON 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rON 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.rON 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8a289b43a34f1375997bd490a0587db7ac2869ff73705f3a 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Dzu 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8a289b43a34f1375997bd490a0587db7ac2869ff73705f3a 2 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8a289b43a34f1375997bd490a0587db7ac2869ff73705f3a 2 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8a289b43a34f1375997bd490a0587db7ac2869ff73705f3a 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:23.296 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Dzu 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Dzu 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Dzu 00:19:23.554 18:51:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=734f9c57af19cb43a15ff7a9e8e85b97 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.shF 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 734f9c57af19cb43a15ff7a9e8e85b97 1 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 734f9c57af19cb43a15ff7a9e8e85b97 1 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=734f9c57af19cb43a15ff7a9e8e85b97 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.shF 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.shF 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.shF 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dc980f610a7ce62694d584856cef13f04f21ba3315c673b21779c37fa4130861 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rRs 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dc980f610a7ce62694d584856cef13f04f21ba3315c673b21779c37fa4130861 3 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dc980f610a7ce62694d584856cef13f04f21ba3315c673b21779c37fa4130861 3 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dc980f610a7ce62694d584856cef13f04f21ba3315c673b21779c37fa4130861 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rRs 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rRs 00:19:23.554 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.rRs 00:19:23.555 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:23.555 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3605624 00:19:23.555 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3605624 ']' 00:19:23.555 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.555 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.555 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
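The `gen_dhchap_key` calls above draw half the requested length in random bytes with `xxd`, keep the resulting hex string itself as the secret, and pass it to a small inline Python program (`python -`, body not shown in the log) that emits the DHHC-1 representation written into each `/tmp/spdk.key-*` file. A minimal sketch of that formatting step, assuming the standard DHHC-1 layout of base64(secret bytes followed by CRC32 in little-endian); the function name and the exact contents of the inline script are assumptions, not taken from the log:

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int) -> str:
    """Render a hex-string secret in DHHC-1 form: base64 of the ASCII key
    followed by its CRC32 (little-endian). Sketch of what the inline
    `python -` step in the log appears to produce (hypothetical helper)."""
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")
    return "DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(data + crc).decode())

# The 48-character null-digest key generated in the log:
secret = format_dhchap_key("d024d6532452c6f0837bc49c82eaeac44b6d084d3575068d", 0)
print(secret)
```

The base64 body of the result matches the `--dhchap-secret DHHC-1:00:ZDAyNGQ2...` value passed to `nvme connect` later in the log, since the host and target must present the same formatted secret.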
00:19:23.555 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.555 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.813 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.813 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:23.813 18:51:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3605708 /var/tmp/host.sock 00:19:23.813 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3605708 ']' 00:19:23.813 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:23.813 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.813 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:23.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:23.813 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.813 18:51:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MVf 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.MVf 00:19:24.072 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.MVf 00:19:24.330 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.SPw ]] 00:19:24.330 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SPw 00:19:24.330 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.330 18:51:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.330 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.330 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SPw 00:19:24.330 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SPw 00:19:24.640 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:24.640 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.aLc 00:19:24.640 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.640 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.640 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.640 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.aLc 00:19:24.640 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.aLc 00:19:24.897 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.rON ]] 00:19:24.897 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rON 00:19:24.897 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.897 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.897 18:51:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.897 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rON 00:19:24.897 18:51:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rON 00:19:25.164 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:25.164 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Dzu 00:19:25.164 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.164 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.164 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.164 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Dzu 00:19:25.164 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Dzu 00:19:25.421 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.shF ]] 00:19:25.421 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.shF 00:19:25.421 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.421 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.421 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.421 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.shF 00:19:25.421 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.shF 00:19:25.678 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:25.678 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.rRs 00:19:25.678 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.678 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.678 18:51:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.678 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.rRs 00:19:25.678 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.rRs 00:19:25.934 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:25.934 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:25.934 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.935 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.935 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:25.935 18:51:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.192 18:51:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.192 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.449 00:19:26.449 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.449 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.449 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.706 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.706 
18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.706 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.706 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.706 18:51:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.706 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.706 { 00:19:26.706 "cntlid": 1, 00:19:26.706 "qid": 0, 00:19:26.706 "state": "enabled", 00:19:26.707 "thread": "nvmf_tgt_poll_group_000", 00:19:26.707 "listen_address": { 00:19:26.707 "trtype": "TCP", 00:19:26.707 "adrfam": "IPv4", 00:19:26.707 "traddr": "10.0.0.2", 00:19:26.707 "trsvcid": "4420" 00:19:26.707 }, 00:19:26.707 "peer_address": { 00:19:26.707 "trtype": "TCP", 00:19:26.707 "adrfam": "IPv4", 00:19:26.707 "traddr": "10.0.0.1", 00:19:26.707 "trsvcid": "46536" 00:19:26.707 }, 00:19:26.707 "auth": { 00:19:26.707 "state": "completed", 00:19:26.707 "digest": "sha256", 00:19:26.707 "dhgroup": "null" 00:19:26.707 } 00:19:26.707 } 00:19:26.707 ]' 00:19:26.707 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.707 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.707 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.707 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:26.707 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.707 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.707 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.707 18:51:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.964 18:51:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.340 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.598 00:19:28.598 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.598 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.598 18:51:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:19:28.856 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.856 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.856 18:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.856 18:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.856 18:51:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.856 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.856 { 00:19:28.856 "cntlid": 3, 00:19:28.856 "qid": 0, 00:19:28.856 "state": "enabled", 00:19:28.856 "thread": "nvmf_tgt_poll_group_000", 00:19:28.856 "listen_address": { 00:19:28.856 "trtype": "TCP", 00:19:28.856 "adrfam": "IPv4", 00:19:28.856 "traddr": "10.0.0.2", 00:19:28.856 "trsvcid": "4420" 00:19:28.856 }, 00:19:28.856 "peer_address": { 00:19:28.856 "trtype": "TCP", 00:19:28.856 "adrfam": "IPv4", 00:19:28.856 "traddr": "10.0.0.1", 00:19:28.856 "trsvcid": "46562" 00:19:28.856 }, 00:19:28.856 "auth": { 00:19:28.856 "state": "completed", 00:19:28.856 "digest": "sha256", 00:19:28.856 "dhgroup": "null" 00:19:28.856 } 00:19:28.856 } 00:19:28.856 ]' 00:19:28.856 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.856 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.856 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.114 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:29.114 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.114 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.114 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:19:29.114 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.372 18:51:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:19:30.311 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.311 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.311 18:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.311 18:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.311 18:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.311 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.311 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.311 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.569 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.827 00:19:30.827 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.827 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.827 18:51:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.086 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.086 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.086 18:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.086 18:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.086 18:51:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.086 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.086 { 00:19:31.086 "cntlid": 5, 00:19:31.086 "qid": 0, 00:19:31.086 "state": "enabled", 00:19:31.086 "thread": "nvmf_tgt_poll_group_000", 00:19:31.086 "listen_address": { 00:19:31.086 "trtype": "TCP", 00:19:31.086 "adrfam": "IPv4", 00:19:31.086 "traddr": "10.0.0.2", 00:19:31.086 "trsvcid": "4420" 00:19:31.086 }, 00:19:31.086 "peer_address": { 00:19:31.086 "trtype": "TCP", 00:19:31.086 "adrfam": "IPv4", 00:19:31.086 "traddr": "10.0.0.1", 00:19:31.086 "trsvcid": "46608" 00:19:31.086 }, 00:19:31.086 "auth": { 00:19:31.086 "state": "completed", 00:19:31.086 "digest": "sha256", 00:19:31.086 "dhgroup": "null" 00:19:31.086 } 00:19:31.086 } 00:19:31.086 ]' 00:19:31.086 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.086 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.086 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.344 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:31.344 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.344 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.344 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.344 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.602 18:51:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:19:32.541 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.541 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.541 18:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.541 18:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.541 18:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.541 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.541 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.541 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:32.799 18:51:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.799 18:51:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.056 00:19:33.057 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.057 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.057 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:19:33.315 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.315 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.315 18:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.315 18:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.315 18:51:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.315 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.315 { 00:19:33.315 "cntlid": 7, 00:19:33.315 "qid": 0, 00:19:33.315 "state": "enabled", 00:19:33.315 "thread": "nvmf_tgt_poll_group_000", 00:19:33.315 "listen_address": { 00:19:33.315 "trtype": "TCP", 00:19:33.315 "adrfam": "IPv4", 00:19:33.315 "traddr": "10.0.0.2", 00:19:33.315 "trsvcid": "4420" 00:19:33.315 }, 00:19:33.315 "peer_address": { 00:19:33.315 "trtype": "TCP", 00:19:33.315 "adrfam": "IPv4", 00:19:33.315 "traddr": "10.0.0.1", 00:19:33.315 "trsvcid": "50854" 00:19:33.315 }, 00:19:33.315 "auth": { 00:19:33.315 "state": "completed", 00:19:33.315 "digest": "sha256", 00:19:33.315 "dhgroup": "null" 00:19:33.315 } 00:19:33.315 } 00:19:33.315 ]' 00:19:33.315 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.573 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.573 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.573 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:33.573 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.573 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.573 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:33.573 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.831 18:51:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:19:34.766 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.766 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.766 18:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.766 18:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.766 18:51:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.766 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.766 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.766 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.766 18:51:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.024 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.281 00:19:35.281 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.281 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.281 18:51:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.538 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.538 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.538 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.538 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.538 18:51:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.538 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.538 { 00:19:35.538 "cntlid": 9, 00:19:35.538 "qid": 0, 00:19:35.538 "state": "enabled", 00:19:35.538 "thread": "nvmf_tgt_poll_group_000", 00:19:35.538 "listen_address": { 00:19:35.538 "trtype": "TCP", 00:19:35.538 "adrfam": "IPv4", 00:19:35.538 "traddr": "10.0.0.2", 00:19:35.538 "trsvcid": "4420" 00:19:35.538 }, 00:19:35.538 "peer_address": { 00:19:35.538 "trtype": "TCP", 00:19:35.538 "adrfam": "IPv4", 00:19:35.538 "traddr": "10.0.0.1", 00:19:35.538 "trsvcid": "50892" 00:19:35.538 }, 00:19:35.538 "auth": { 00:19:35.538 "state": "completed", 00:19:35.538 "digest": "sha256", 00:19:35.538 "dhgroup": "ffdhe2048" 00:19:35.538 } 00:19:35.538 } 00:19:35.538 ]' 00:19:35.538 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.538 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.538 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.796 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.796 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.796 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.796 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.796 18:51:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.053 18:51:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:19:36.986 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.986 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.986 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.986 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.986 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.986 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.986 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.987 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.244 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.502 00:19:37.502 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.502 18:51:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.502 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.760 { 00:19:37.760 "cntlid": 11, 00:19:37.760 "qid": 0, 00:19:37.760 "state": "enabled", 00:19:37.760 "thread": "nvmf_tgt_poll_group_000", 00:19:37.760 "listen_address": { 00:19:37.760 "trtype": "TCP", 00:19:37.760 "adrfam": "IPv4", 00:19:37.760 "traddr": "10.0.0.2", 00:19:37.760 "trsvcid": "4420" 00:19:37.760 }, 00:19:37.760 "peer_address": { 00:19:37.760 "trtype": "TCP", 00:19:37.760 "adrfam": "IPv4", 00:19:37.760 "traddr": "10.0.0.1", 00:19:37.760 "trsvcid": "50910" 00:19:37.760 }, 00:19:37.760 "auth": { 00:19:37.760 "state": "completed", 00:19:37.760 "digest": "sha256", 00:19:37.760 "dhgroup": "ffdhe2048" 00:19:37.760 } 00:19:37.760 } 00:19:37.760 ]' 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.760 18:51:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.760 18:51:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.018 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.018 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.018 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.275 18:51:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:19:39.217 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.217 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.217 18:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.217 18:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.217 18:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.217 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.217 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.217 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.474 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.731 
00:19:39.731 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.731 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.731 18:51:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.988 { 00:19:39.988 "cntlid": 13, 00:19:39.988 "qid": 0, 00:19:39.988 "state": "enabled", 00:19:39.988 "thread": "nvmf_tgt_poll_group_000", 00:19:39.988 "listen_address": { 00:19:39.988 "trtype": "TCP", 00:19:39.988 "adrfam": "IPv4", 00:19:39.988 "traddr": "10.0.0.2", 00:19:39.988 "trsvcid": "4420" 00:19:39.988 }, 00:19:39.988 "peer_address": { 00:19:39.988 "trtype": "TCP", 00:19:39.988 "adrfam": "IPv4", 00:19:39.988 "traddr": "10.0.0.1", 00:19:39.988 "trsvcid": "50954" 00:19:39.988 }, 00:19:39.988 "auth": { 00:19:39.988 "state": "completed", 00:19:39.988 "digest": "sha256", 00:19:39.988 "dhgroup": "ffdhe2048" 00:19:39.988 } 00:19:39.988 } 00:19:39.988 ]' 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.988 18:51:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.988 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.245 18:51:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.615 18:51:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.872 
00:19:41.873 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.873 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.873 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.129 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.129 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.129 18:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.129 18:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.129 18:51:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.129 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.129 { 00:19:42.129 "cntlid": 15, 00:19:42.129 "qid": 0, 00:19:42.129 "state": "enabled", 00:19:42.129 "thread": "nvmf_tgt_poll_group_000", 00:19:42.129 "listen_address": { 00:19:42.129 "trtype": "TCP", 00:19:42.129 "adrfam": "IPv4", 00:19:42.129 "traddr": "10.0.0.2", 00:19:42.129 "trsvcid": "4420" 00:19:42.129 }, 00:19:42.129 "peer_address": { 00:19:42.129 "trtype": "TCP", 00:19:42.129 "adrfam": "IPv4", 00:19:42.129 "traddr": "10.0.0.1", 00:19:42.129 "trsvcid": "50982" 00:19:42.129 }, 00:19:42.129 "auth": { 00:19:42.129 "state": "completed", 00:19:42.129 "digest": "sha256", 00:19:42.129 "dhgroup": "ffdhe2048" 00:19:42.129 } 00:19:42.129 } 00:19:42.129 ]' 00:19:42.129 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.129 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.129 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.386 18:51:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.386 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.386 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.386 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.386 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.644 18:51:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:19:43.609 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.609 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.609 18:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.609 18:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.609 18:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.609 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.609 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.609 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.609 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.867 18:51:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.125 00:19:44.125 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.125 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.125 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.382 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.382 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.382 18:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.382 18:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.382 18:51:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.382 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.382 { 00:19:44.382 "cntlid": 17, 00:19:44.382 "qid": 0, 00:19:44.382 "state": "enabled", 00:19:44.382 "thread": "nvmf_tgt_poll_group_000", 00:19:44.382 "listen_address": { 00:19:44.382 "trtype": "TCP", 00:19:44.382 "adrfam": "IPv4", 00:19:44.382 "traddr": "10.0.0.2", 00:19:44.382 "trsvcid": "4420" 00:19:44.382 }, 00:19:44.382 "peer_address": { 00:19:44.382 "trtype": "TCP", 00:19:44.382 "adrfam": "IPv4", 00:19:44.382 "traddr": "10.0.0.1", 00:19:44.382 "trsvcid": "47192" 00:19:44.382 }, 00:19:44.382 "auth": { 00:19:44.382 "state": "completed", 00:19:44.382 "digest": "sha256", 00:19:44.382 "dhgroup": "ffdhe3072" 00:19:44.382 } 00:19:44.382 } 00:19:44.382 ]' 00:19:44.382 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.640 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:19:44.640 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.640 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.640 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.640 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.640 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.640 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.898 18:51:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:19:45.850 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.850 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.850 18:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.850 18:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.850 18:51:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.850 18:51:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.850 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.850 18:51:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.108 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.366 00:19:46.366 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.366 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.366 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.623 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.623 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.623 18:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.623 18:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.623 18:51:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.623 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.623 { 00:19:46.623 "cntlid": 19, 00:19:46.623 "qid": 0, 00:19:46.623 "state": "enabled", 00:19:46.623 "thread": "nvmf_tgt_poll_group_000", 00:19:46.623 "listen_address": { 00:19:46.623 "trtype": "TCP", 00:19:46.623 "adrfam": "IPv4", 00:19:46.623 "traddr": "10.0.0.2", 00:19:46.623 "trsvcid": "4420" 00:19:46.623 }, 00:19:46.623 "peer_address": { 00:19:46.623 "trtype": "TCP", 00:19:46.623 "adrfam": "IPv4", 00:19:46.623 "traddr": "10.0.0.1", 00:19:46.623 "trsvcid": "47228" 00:19:46.623 }, 00:19:46.623 "auth": { 00:19:46.623 "state": "completed", 00:19:46.623 "digest": "sha256", 00:19:46.623 "dhgroup": "ffdhe3072" 00:19:46.623 } 00:19:46.623 } 00:19:46.623 ]' 00:19:46.623 
18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.881 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.881 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.881 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.881 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.881 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.881 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.881 18:51:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.139 18:51:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:19:48.074 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.074 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.074 18:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.074 18:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.074 18:51:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.074 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.074 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.074 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.331 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:48.332 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.589 00:19:48.589 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.589 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.589 18:51:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.847 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.847 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.847 18:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.847 18:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.847 18:51:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.847 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.847 { 00:19:48.847 "cntlid": 21, 00:19:48.847 "qid": 0, 00:19:48.847 "state": "enabled", 00:19:48.847 "thread": "nvmf_tgt_poll_group_000", 00:19:48.847 "listen_address": { 00:19:48.847 "trtype": "TCP", 00:19:48.847 "adrfam": "IPv4", 00:19:48.847 "traddr": "10.0.0.2", 00:19:48.847 "trsvcid": "4420" 00:19:48.847 }, 00:19:48.847 "peer_address": { 00:19:48.847 "trtype": "TCP", 00:19:48.847 "adrfam": "IPv4", 00:19:48.847 "traddr": "10.0.0.1", 00:19:48.847 "trsvcid": "47256" 00:19:48.847 }, 00:19:48.847 "auth": { 00:19:48.847 "state": "completed", 00:19:48.847 "digest": 
"sha256", 00:19:48.847 "dhgroup": "ffdhe3072" 00:19:48.847 } 00:19:48.847 } 00:19:48.847 ]' 00:19:48.847 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.105 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.105 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.105 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:49.105 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.105 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.105 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.105 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.364 18:51:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:19:50.297 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.297 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.297 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.297 18:51:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.297 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.297 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.297 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.297 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.555 18:51:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.813 00:19:50.813 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.813 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.813 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.071 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.071 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.071 18:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.071 18:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.071 18:51:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.071 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.071 { 00:19:51.071 "cntlid": 23, 00:19:51.071 "qid": 0, 00:19:51.071 "state": "enabled", 00:19:51.071 "thread": "nvmf_tgt_poll_group_000", 00:19:51.071 "listen_address": { 00:19:51.071 "trtype": "TCP", 00:19:51.071 "adrfam": "IPv4", 00:19:51.071 "traddr": "10.0.0.2", 00:19:51.072 "trsvcid": "4420" 00:19:51.072 }, 00:19:51.072 "peer_address": { 00:19:51.072 "trtype": "TCP", 00:19:51.072 "adrfam": "IPv4", 00:19:51.072 "traddr": "10.0.0.1", 00:19:51.072 "trsvcid": "47290" 00:19:51.072 }, 00:19:51.072 "auth": 
{ 00:19:51.072 "state": "completed", 00:19:51.072 "digest": "sha256", 00:19:51.072 "dhgroup": "ffdhe3072" 00:19:51.072 } 00:19:51.072 } 00:19:51.072 ]' 00:19:51.072 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.329 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.329 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.329 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.329 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.329 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.329 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.329 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.587 18:51:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:19:52.520 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.520 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.520 18:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.520 18:51:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.520 18:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.520 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.520 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.520 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.520 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.778 18:51:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.341 00:19:53.341 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.341 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.341 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.599 { 00:19:53.599 "cntlid": 25, 00:19:53.599 "qid": 0, 00:19:53.599 "state": "enabled", 00:19:53.599 "thread": "nvmf_tgt_poll_group_000", 00:19:53.599 "listen_address": { 00:19:53.599 "trtype": "TCP", 00:19:53.599 "adrfam": "IPv4", 00:19:53.599 "traddr": "10.0.0.2", 00:19:53.599 "trsvcid": "4420" 00:19:53.599 }, 00:19:53.599 "peer_address": { 00:19:53.599 "trtype": "TCP", 
00:19:53.599 "adrfam": "IPv4", 00:19:53.599 "traddr": "10.0.0.1", 00:19:53.599 "trsvcid": "60236" 00:19:53.599 }, 00:19:53.599 "auth": { 00:19:53.599 "state": "completed", 00:19:53.599 "digest": "sha256", 00:19:53.599 "dhgroup": "ffdhe4096" 00:19:53.599 } 00:19:53.599 } 00:19:53.599 ]' 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.599 18:51:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.856 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:19:54.790 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.790 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.790 18:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.790 18:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.790 18:51:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.790 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.790 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.790 18:51:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.048 18:51:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.048 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.613 00:19:55.613 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.613 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.613 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.871 { 00:19:55.871 "cntlid": 27, 00:19:55.871 "qid": 0, 00:19:55.871 "state": "enabled", 00:19:55.871 "thread": "nvmf_tgt_poll_group_000", 00:19:55.871 "listen_address": { 00:19:55.871 "trtype": "TCP", 00:19:55.871 "adrfam": 
"IPv4", 00:19:55.871 "traddr": "10.0.0.2", 00:19:55.871 "trsvcid": "4420" 00:19:55.871 }, 00:19:55.871 "peer_address": { 00:19:55.871 "trtype": "TCP", 00:19:55.871 "adrfam": "IPv4", 00:19:55.871 "traddr": "10.0.0.1", 00:19:55.871 "trsvcid": "60262" 00:19:55.871 }, 00:19:55.871 "auth": { 00:19:55.871 "state": "completed", 00:19:55.871 "digest": "sha256", 00:19:55.871 "dhgroup": "ffdhe4096" 00:19:55.871 } 00:19:55.871 } 00:19:55.871 ]' 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.871 18:51:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.871 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.871 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.871 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.129 18:51:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:19:57.063 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:57.063 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.063 18:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.063 18:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.063 18:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.063 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.063 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.063 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.321 18:51:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.321 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.887 00:19:57.887 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.887 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.887 18:51:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.144 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.144 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.144 18:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.144 18:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.145 { 00:19:58.145 "cntlid": 29, 00:19:58.145 "qid": 0, 00:19:58.145 "state": "enabled", 00:19:58.145 "thread": 
"nvmf_tgt_poll_group_000", 00:19:58.145 "listen_address": { 00:19:58.145 "trtype": "TCP", 00:19:58.145 "adrfam": "IPv4", 00:19:58.145 "traddr": "10.0.0.2", 00:19:58.145 "trsvcid": "4420" 00:19:58.145 }, 00:19:58.145 "peer_address": { 00:19:58.145 "trtype": "TCP", 00:19:58.145 "adrfam": "IPv4", 00:19:58.145 "traddr": "10.0.0.1", 00:19:58.145 "trsvcid": "60292" 00:19:58.145 }, 00:19:58.145 "auth": { 00:19:58.145 "state": "completed", 00:19:58.145 "digest": "sha256", 00:19:58.145 "dhgroup": "ffdhe4096" 00:19:58.145 } 00:19:58.145 } 00:19:58.145 ]' 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.145 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.403 18:51:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:19:59.336 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.336 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.336 18:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.336 18:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.336 18:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.336 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.336 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.336 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.593 18:51:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.594 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.594 18:51:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.162 00:20:00.162 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.162 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.162 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.451 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.451 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.451 18:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.451 18:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.451 18:51:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.451 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.451 { 00:20:00.451 "cntlid": 31, 00:20:00.451 "qid": 0, 00:20:00.451 "state": "enabled", 00:20:00.451 "thread": 
"nvmf_tgt_poll_group_000", 00:20:00.452 "listen_address": { 00:20:00.452 "trtype": "TCP", 00:20:00.452 "adrfam": "IPv4", 00:20:00.452 "traddr": "10.0.0.2", 00:20:00.452 "trsvcid": "4420" 00:20:00.452 }, 00:20:00.452 "peer_address": { 00:20:00.452 "trtype": "TCP", 00:20:00.452 "adrfam": "IPv4", 00:20:00.452 "traddr": "10.0.0.1", 00:20:00.452 "trsvcid": "60336" 00:20:00.452 }, 00:20:00.452 "auth": { 00:20:00.452 "state": "completed", 00:20:00.452 "digest": "sha256", 00:20:00.452 "dhgroup": "ffdhe4096" 00:20:00.452 } 00:20:00.452 } 00:20:00.452 ]' 00:20:00.452 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.452 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.452 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.452 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.452 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.452 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.452 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.452 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.709 18:51:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:20:01.641 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.641 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.641 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.641 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.642 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.642 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.642 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.642 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.642 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.642 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.899 18:51:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.463 00:20:02.463 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.463 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.463 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.721 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.721 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.721 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.721 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.721 18:51:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.721 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:02.721 { 00:20:02.721 "cntlid": 33, 00:20:02.721 "qid": 0, 00:20:02.721 "state": "enabled", 00:20:02.721 "thread": "nvmf_tgt_poll_group_000", 00:20:02.721 "listen_address": { 00:20:02.721 "trtype": "TCP", 00:20:02.721 "adrfam": "IPv4", 00:20:02.721 "traddr": "10.0.0.2", 00:20:02.721 "trsvcid": "4420" 00:20:02.721 }, 00:20:02.721 "peer_address": { 00:20:02.721 "trtype": "TCP", 00:20:02.721 "adrfam": "IPv4", 00:20:02.722 "traddr": "10.0.0.1", 00:20:02.722 "trsvcid": "60358" 00:20:02.722 }, 00:20:02.722 "auth": { 00:20:02.722 "state": "completed", 00:20:02.722 "digest": "sha256", 00:20:02.722 "dhgroup": "ffdhe6144" 00:20:02.722 } 00:20:02.722 } 00:20:02.722 ]' 00:20:02.722 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.722 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.722 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.722 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.722 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.722 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.722 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.722 18:51:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.980 18:51:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret 
DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.352 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.916
00:20:04.916 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:04.916 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:04.916 18:51:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:05.183 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:05.184 {
00:20:05.184 "cntlid": 35,
00:20:05.184 "qid": 0,
00:20:05.184 "state": "enabled",
00:20:05.184 "thread": "nvmf_tgt_poll_group_000",
00:20:05.184 "listen_address": {
00:20:05.184 "trtype": "TCP",
00:20:05.184 "adrfam": "IPv4",
00:20:05.184 "traddr": "10.0.0.2",
00:20:05.184 "trsvcid": "4420"
00:20:05.184 },
00:20:05.184 "peer_address": {
00:20:05.184 "trtype": "TCP",
00:20:05.184 "adrfam": "IPv4",
00:20:05.184 "traddr": "10.0.0.1",
00:20:05.184 "trsvcid": "51540"
00:20:05.184 },
00:20:05.184 "auth": {
00:20:05.184 "state": "completed",
00:20:05.184 "digest": "sha256",
00:20:05.184 "dhgroup": "ffdhe6144"
00:20:05.184 }
00:20:05.184 }
00:20:05.184 ]'
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:05.184 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:05.441 18:51:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==:
00:20:06.370 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:06.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:06.370 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:06.370 18:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:06.370 18:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.370 18:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:06.370 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:06.370 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:06.370 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:06.627 18:51:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:07.191
00:20:07.191 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:07.191 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:07.191 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:07.757 {
00:20:07.757 "cntlid": 37,
00:20:07.757 "qid": 0,
00:20:07.757 "state": "enabled",
00:20:07.757 "thread": "nvmf_tgt_poll_group_000",
00:20:07.757 "listen_address": {
00:20:07.757 "trtype": "TCP",
00:20:07.757 "adrfam": "IPv4",
00:20:07.757 "traddr": "10.0.0.2",
00:20:07.757 "trsvcid": "4420"
00:20:07.757 },
00:20:07.757 "peer_address": {
00:20:07.757 "trtype": "TCP",
00:20:07.757 "adrfam": "IPv4",
00:20:07.757 "traddr": "10.0.0.1",
00:20:07.757 "trsvcid": "51570"
00:20:07.757 },
00:20:07.757 "auth": {
00:20:07.757 "state": "completed",
00:20:07.757 "digest": "sha256",
00:20:07.757 "dhgroup": "ffdhe6144"
00:20:07.757 }
00:20:07.757 }
00:20:07.757 ]'
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:07.757 18:51:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:08.015 18:51:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo:
00:20:08.945 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:08.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:08.945 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:08.945 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:08.945 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.945 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:08.945 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:08.945 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:08.945 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:09.203 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:09.767
00:20:09.767 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:09.767 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:09.767 18:51:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:10.024 {
00:20:10.024 "cntlid": 39,
00:20:10.024 "qid": 0,
00:20:10.024 "state": "enabled",
00:20:10.024 "thread": "nvmf_tgt_poll_group_000",
00:20:10.024 "listen_address": {
00:20:10.024 "trtype": "TCP",
00:20:10.024 "adrfam": "IPv4",
00:20:10.024 "traddr": "10.0.0.2",
00:20:10.024 "trsvcid": "4420"
00:20:10.024 },
00:20:10.024 "peer_address": {
00:20:10.024 "trtype": "TCP",
00:20:10.024 "adrfam": "IPv4",
00:20:10.024 "traddr": "10.0.0.1",
00:20:10.024 "trsvcid": "51592"
00:20:10.024 },
00:20:10.024 "auth": {
00:20:10.024 "state": "completed",
00:20:10.024 "digest": "sha256",
00:20:10.024 "dhgroup": "ffdhe6144"
00:20:10.024 }
00:20:10.024 }
00:20:10.024 ]'
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:10.024 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:10.282 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:10.282 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:10.282 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:10.282 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:10.540 18:51:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=:
00:20:11.473 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.473 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:11.473 18:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:11.473 18:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.473 18:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:11.473 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:11.473 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:11.473 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:11.473 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:11.731 18:51:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.663
00:20:12.663 18:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:12.663 18:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:12.663 18:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:12.949 18:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:12.949 18:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:12.949 18:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:12.949 18:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.949 18:52:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:12.949 18:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:12.949 {
00:20:12.949 "cntlid": 41,
00:20:12.949 "qid": 0,
00:20:12.949 "state": "enabled",
00:20:12.949 "thread": "nvmf_tgt_poll_group_000",
00:20:12.949 "listen_address": {
00:20:12.949 "trtype": "TCP",
00:20:12.949 "adrfam": "IPv4",
00:20:12.949 "traddr": "10.0.0.2",
00:20:12.949 "trsvcid": "4420"
00:20:12.949 },
00:20:12.949 "peer_address": {
00:20:12.949 "trtype": "TCP",
00:20:12.949 "adrfam": "IPv4",
00:20:12.949 "traddr": "10.0.0.1",
00:20:12.949 "trsvcid": "51630"
00:20:12.949 },
00:20:12.949 "auth": {
00:20:12.949 "state": "completed",
00:20:12.950 "digest": "sha256",
00:20:12.950 "dhgroup": "ffdhe8192"
00:20:12.950 }
00:20:12.950 }
00:20:12.950 ]'
00:20:12.950 18:52:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:12.950 18:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:12.950 18:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:12.950 18:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:12.950 18:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:12.950 18:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:12.950 18:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:12.950 18:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:13.208 18:52:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=:
00:20:14.141 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:14.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:14.141 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:14.141 18:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:14.141 18:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.141 18:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:14.141 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:14.141 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:14.141 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.399 18:52:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.332
00:20:15.332 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:15.332 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:15.332 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.588 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.588 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:15.588 18:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:15.589 {
00:20:15.589 "cntlid": 43,
00:20:15.589 "qid": 0,
00:20:15.589 "state": "enabled",
00:20:15.589 "thread": "nvmf_tgt_poll_group_000",
00:20:15.589 "listen_address": {
00:20:15.589 "trtype": "TCP",
00:20:15.589 "adrfam": "IPv4",
00:20:15.589 "traddr": "10.0.0.2",
00:20:15.589 "trsvcid": "4420"
00:20:15.589 },
00:20:15.589 "peer_address": {
00:20:15.589 "trtype": "TCP",
00:20:15.589 "adrfam": "IPv4",
00:20:15.589 "traddr": "10.0.0.1",
00:20:15.589 "trsvcid": "41960"
00:20:15.589 },
00:20:15.589 "auth": {
00:20:15.589 "state": "completed",
00:20:15.589 "digest": "sha256",
00:20:15.589 "dhgroup": "ffdhe8192"
00:20:15.589 }
00:20:15.589 }
00:20:15.589 ]'
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:15.589 18:52:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:15.846 18:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==:
00:20:16.779 18:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.779 18:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:16.779 18:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:16.779 18:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.779 18:52:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:16.779 18:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:16.779 18:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:16.779 18:52:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.344 18:52:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:18.278
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:18.278 {
00:20:18.278 "cntlid": 45,
00:20:18.278 "qid": 0,
00:20:18.278 "state": "enabled",
00:20:18.278 "thread": "nvmf_tgt_poll_group_000",
00:20:18.278 "listen_address": {
00:20:18.278 "trtype": "TCP",
00:20:18.278 "adrfam": "IPv4",
00:20:18.278 "traddr": "10.0.0.2",
00:20:18.278 "trsvcid": "4420"
00:20:18.278 },
00:20:18.278 "peer_address": {
00:20:18.278 "trtype": "TCP",
00:20:18.278 "adrfam": "IPv4",
00:20:18.278 "traddr": "10.0.0.1",
00:20:18.278 "trsvcid": "41986"
00:20:18.278 },
00:20:18.278 "auth": {
00:20:18.278 "state": "completed",
00:20:18.278 "digest": "sha256",
00:20:18.278 "dhgroup": "ffdhe8192"
00:20:18.278 }
00:20:18.278 }
00:20:18.278 ]'
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:18.278 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:18.539 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:18.539 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:18.539 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:18.539 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:18.539 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:18.834 18:52:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo:
00:20:19.767 18:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:19.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:19.767 18:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:19.767 18:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:19.767 18:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.767 18:52:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:19.767 18:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:19.767 18:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:19.767 18:52:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:20.025 18:52:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:20.960
00:20:20.960 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:20.960 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:20.960 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:21.218 {
00:20:21.218 "cntlid": 47,
00:20:21.218 "qid": 0,
00:20:21.218 "state": "enabled",
00:20:21.218 "thread": "nvmf_tgt_poll_group_000",
00:20:21.218 "listen_address": {
00:20:21.218 "trtype": "TCP",
00:20:21.218 "adrfam": "IPv4",
00:20:21.218 "traddr": "10.0.0.2",
00:20:21.218 "trsvcid": "4420"
00:20:21.218 },
00:20:21.218 "peer_address": {
00:20:21.218 "trtype": "TCP",
00:20:21.218 "adrfam": "IPv4",
00:20:21.218 "traddr": "10.0.0.1",
00:20:21.218 "trsvcid": "42020"
00:20:21.218 },
00:20:21.218 "auth": {
00:20:21.218 "state": "completed",
00:20:21.218 "digest": "sha256",
00:20:21.218 "dhgroup": "ffdhe8192"
00:20:21.218 }
00:20:21.218 }
00:20:21.218 ]'
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:21.218 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:21.476 18:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=:
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:22.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384
--dhchap-dhgroups null 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.862 18:52:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.120 00:20:23.120 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.120 18:52:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.120 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.378 { 00:20:23.378 "cntlid": 49, 00:20:23.378 "qid": 0, 00:20:23.378 "state": "enabled", 00:20:23.378 "thread": "nvmf_tgt_poll_group_000", 00:20:23.378 "listen_address": { 00:20:23.378 "trtype": "TCP", 00:20:23.378 "adrfam": "IPv4", 00:20:23.378 "traddr": "10.0.0.2", 00:20:23.378 "trsvcid": "4420" 00:20:23.378 }, 00:20:23.378 "peer_address": { 00:20:23.378 "trtype": "TCP", 00:20:23.378 "adrfam": "IPv4", 00:20:23.378 "traddr": "10.0.0.1", 00:20:23.378 "trsvcid": "42692" 00:20:23.378 }, 00:20:23.378 "auth": { 00:20:23.378 "state": "completed", 00:20:23.378 "digest": "sha384", 00:20:23.378 "dhgroup": "null" 00:20:23.378 } 00:20:23.378 } 00:20:23.378 ]' 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:23.378 18:52:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.635 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.635 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.635 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.893 18:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:20:24.825 18:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.825 18:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.825 18:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.825 18:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.825 18:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.825 18:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.825 18:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:24.825 18:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.083 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.340 00:20:25.340 
18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.340 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.340 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.597 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.598 { 00:20:25.598 "cntlid": 51, 00:20:25.598 "qid": 0, 00:20:25.598 "state": "enabled", 00:20:25.598 "thread": "nvmf_tgt_poll_group_000", 00:20:25.598 "listen_address": { 00:20:25.598 "trtype": "TCP", 00:20:25.598 "adrfam": "IPv4", 00:20:25.598 "traddr": "10.0.0.2", 00:20:25.598 "trsvcid": "4420" 00:20:25.598 }, 00:20:25.598 "peer_address": { 00:20:25.598 "trtype": "TCP", 00:20:25.598 "adrfam": "IPv4", 00:20:25.598 "traddr": "10.0.0.1", 00:20:25.598 "trsvcid": "42716" 00:20:25.598 }, 00:20:25.598 "auth": { 00:20:25.598 "state": "completed", 00:20:25.598 "digest": "sha384", 00:20:25.598 "dhgroup": "null" 00:20:25.598 } 00:20:25.598 } 00:20:25.598 ]' 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.598 18:52:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.598 18:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.856 18:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:20:26.788 18:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.788 18:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.788 18:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.788 18:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.788 18:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.788 18:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.788 18:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:26.788 18:52:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.045 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:27.610 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.610 { 00:20:27.610 "cntlid": 53, 00:20:27.610 "qid": 0, 00:20:27.610 "state": "enabled", 00:20:27.610 "thread": "nvmf_tgt_poll_group_000", 00:20:27.610 "listen_address": { 00:20:27.610 "trtype": "TCP", 00:20:27.610 "adrfam": "IPv4", 00:20:27.610 "traddr": "10.0.0.2", 00:20:27.610 "trsvcid": "4420" 00:20:27.610 }, 00:20:27.610 "peer_address": { 00:20:27.610 "trtype": "TCP", 00:20:27.610 "adrfam": "IPv4", 00:20:27.610 "traddr": "10.0.0.1", 00:20:27.610 "trsvcid": "42750" 00:20:27.610 }, 00:20:27.610 "auth": { 00:20:27.610 "state": "completed", 00:20:27.610 "digest": "sha384", 00:20:27.610 "dhgroup": "null" 00:20:27.610 } 00:20:27.610 } 00:20:27.610 ]' 00:20:27.610 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.867 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.867 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:27.867 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:27.867 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.867 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.867 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.867 18:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.125 18:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:20:29.056 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.056 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.056 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.056 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.056 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.056 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.056 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:20:29.056 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.315 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:20:29.880 00:20:29.880 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.880 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.880 18:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.880 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.880 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.880 18:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.880 18:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.880 18:52:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.880 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.880 { 00:20:29.880 "cntlid": 55, 00:20:29.880 "qid": 0, 00:20:29.880 "state": "enabled", 00:20:29.880 "thread": "nvmf_tgt_poll_group_000", 00:20:29.880 "listen_address": { 00:20:29.880 "trtype": "TCP", 00:20:29.880 "adrfam": "IPv4", 00:20:29.880 "traddr": "10.0.0.2", 00:20:29.880 "trsvcid": "4420" 00:20:29.880 }, 00:20:29.880 "peer_address": { 00:20:29.880 "trtype": "TCP", 00:20:29.880 "adrfam": "IPv4", 00:20:29.880 "traddr": "10.0.0.1", 00:20:29.880 "trsvcid": "42778" 00:20:29.880 }, 00:20:29.880 "auth": { 00:20:29.880 "state": "completed", 00:20:29.880 "digest": "sha384", 00:20:29.880 "dhgroup": "null" 00:20:29.880 } 00:20:29.880 } 00:20:29.880 ]' 00:20:29.880 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.138 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.138 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.138 
18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:30.138 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.138 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.138 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.138 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.408 18:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:20:31.341 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.341 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.341 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.341 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.341 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.341 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.341 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.341 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.341 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.599 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:31.599 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.599 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.600 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:31.600 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:31.600 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.600 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.600 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.600 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.600 18:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.600 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.600 18:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.166 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.166 { 00:20:32.166 "cntlid": 57, 00:20:32.166 "qid": 0, 00:20:32.166 "state": "enabled", 00:20:32.166 "thread": "nvmf_tgt_poll_group_000", 00:20:32.166 "listen_address": { 00:20:32.166 "trtype": "TCP", 00:20:32.166 "adrfam": "IPv4", 00:20:32.166 "traddr": "10.0.0.2", 00:20:32.166 "trsvcid": "4420" 00:20:32.166 }, 00:20:32.166 "peer_address": { 00:20:32.166 "trtype": "TCP", 00:20:32.166 "adrfam": "IPv4", 00:20:32.166 "traddr": "10.0.0.1", 00:20:32.166 "trsvcid": "42806" 00:20:32.166 }, 00:20:32.166 "auth": { 00:20:32.166 "state": "completed", 00:20:32.166 "digest": "sha384", 00:20:32.166 "dhgroup": "ffdhe2048" 00:20:32.166 } 00:20:32.166 } 00:20:32.166 ]' 00:20:32.166 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.424 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:20:32.424 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.424 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.424 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.424 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.424 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.424 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.682 18:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:20:33.614 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.614 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.614 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.614 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.614 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.614 18:52:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.614 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.614 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.871 18:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.129 00:20:34.129 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.129 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.129 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.387 { 00:20:34.387 "cntlid": 59, 00:20:34.387 "qid": 0, 00:20:34.387 "state": "enabled", 00:20:34.387 "thread": "nvmf_tgt_poll_group_000", 00:20:34.387 "listen_address": { 00:20:34.387 "trtype": "TCP", 00:20:34.387 "adrfam": "IPv4", 00:20:34.387 "traddr": "10.0.0.2", 00:20:34.387 "trsvcid": "4420" 00:20:34.387 }, 00:20:34.387 "peer_address": { 00:20:34.387 "trtype": "TCP", 00:20:34.387 "adrfam": "IPv4", 00:20:34.387 "traddr": "10.0.0.1", 00:20:34.387 "trsvcid": "38416" 00:20:34.387 }, 00:20:34.387 "auth": { 00:20:34.387 "state": "completed", 00:20:34.387 "digest": "sha384", 00:20:34.387 "dhgroup": "ffdhe2048" 00:20:34.387 } 00:20:34.387 } 00:20:34.387 ]' 00:20:34.387 
18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.387 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.646 18:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:20:35.578 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.578 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.578 18:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.578 18:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.578 18:52:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.578 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.578 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.578 18:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:35.835 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.436 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.436 { 00:20:36.436 "cntlid": 61, 00:20:36.436 "qid": 0, 00:20:36.436 "state": "enabled", 00:20:36.436 "thread": "nvmf_tgt_poll_group_000", 00:20:36.436 "listen_address": { 00:20:36.436 "trtype": "TCP", 00:20:36.436 "adrfam": "IPv4", 00:20:36.436 "traddr": "10.0.0.2", 00:20:36.436 "trsvcid": "4420" 00:20:36.436 }, 00:20:36.436 "peer_address": { 00:20:36.436 "trtype": "TCP", 00:20:36.436 "adrfam": "IPv4", 00:20:36.436 "traddr": "10.0.0.1", 00:20:36.436 "trsvcid": "38448" 00:20:36.436 }, 00:20:36.436 "auth": { 00:20:36.436 "state": "completed", 00:20:36.436 "digest": 
"sha384", 00:20:36.436 "dhgroup": "ffdhe2048" 00:20:36.436 } 00:20:36.436 } 00:20:36.436 ]' 00:20:36.436 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.693 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.693 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.693 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.693 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.693 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.693 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.693 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.950 18:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:20:37.882 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.882 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.882 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.882 18:52:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.882 18:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.882 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.882 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.882 18:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.139 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.397 00:20:38.397 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.397 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.397 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.654 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.654 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.654 18:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.654 18:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.654 18:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.655 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.655 { 00:20:38.655 "cntlid": 63, 00:20:38.655 "qid": 0, 00:20:38.655 "state": "enabled", 00:20:38.655 "thread": "nvmf_tgt_poll_group_000", 00:20:38.655 "listen_address": { 00:20:38.655 "trtype": "TCP", 00:20:38.655 "adrfam": "IPv4", 00:20:38.655 "traddr": "10.0.0.2", 00:20:38.655 "trsvcid": "4420" 00:20:38.655 }, 00:20:38.655 "peer_address": { 00:20:38.655 "trtype": "TCP", 00:20:38.655 "adrfam": "IPv4", 00:20:38.655 "traddr": "10.0.0.1", 00:20:38.655 "trsvcid": "38482" 00:20:38.655 }, 00:20:38.655 "auth": 
{ 00:20:38.655 "state": "completed", 00:20:38.655 "digest": "sha384", 00:20:38.655 "dhgroup": "ffdhe2048" 00:20:38.655 } 00:20:38.655 } 00:20:38.655 ]' 00:20:38.655 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.655 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.655 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.912 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.912 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.912 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.912 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.912 18:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.169 18:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:20:40.102 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.102 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.102 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.102 18:52:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.102 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.102 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.102 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.102 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.102 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.361 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.620 00:20:40.620 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.620 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.620 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.879 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.879 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.879 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.879 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.879 18:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.879 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.879 { 00:20:40.879 "cntlid": 65, 00:20:40.879 "qid": 0, 00:20:40.879 "state": "enabled", 00:20:40.879 "thread": "nvmf_tgt_poll_group_000", 00:20:40.879 "listen_address": { 00:20:40.879 "trtype": "TCP", 00:20:40.879 "adrfam": "IPv4", 00:20:40.879 "traddr": "10.0.0.2", 00:20:40.879 "trsvcid": "4420" 00:20:40.879 }, 00:20:40.879 "peer_address": { 00:20:40.879 "trtype": "TCP", 
00:20:40.879 "adrfam": "IPv4", 00:20:40.879 "traddr": "10.0.0.1", 00:20:40.879 "trsvcid": "38506" 00:20:40.879 }, 00:20:40.879 "auth": { 00:20:40.879 "state": "completed", 00:20:40.879 "digest": "sha384", 00:20:40.879 "dhgroup": "ffdhe3072" 00:20:40.879 } 00:20:40.879 } 00:20:40.879 ]' 00:20:40.879 18:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.879 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.879 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.879 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.879 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.879 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.879 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.879 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.138 18:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.514 18:52:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.514 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.771 00:20:43.028 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.028 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.028 18:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.284 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.284 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.284 18:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.284 18:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.284 18:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.284 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.284 { 00:20:43.284 "cntlid": 67, 00:20:43.284 "qid": 0, 00:20:43.284 "state": "enabled", 00:20:43.284 "thread": "nvmf_tgt_poll_group_000", 00:20:43.284 "listen_address": { 00:20:43.284 "trtype": "TCP", 00:20:43.284 "adrfam": 
"IPv4", 00:20:43.284 "traddr": "10.0.0.2", 00:20:43.284 "trsvcid": "4420" 00:20:43.284 }, 00:20:43.284 "peer_address": { 00:20:43.284 "trtype": "TCP", 00:20:43.284 "adrfam": "IPv4", 00:20:43.284 "traddr": "10.0.0.1", 00:20:43.284 "trsvcid": "38524" 00:20:43.284 }, 00:20:43.284 "auth": { 00:20:43.284 "state": "completed", 00:20:43.284 "digest": "sha384", 00:20:43.284 "dhgroup": "ffdhe3072" 00:20:43.284 } 00:20:43.284 } 00:20:43.284 ]' 00:20:43.284 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.284 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.284 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.285 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:43.285 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.285 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.285 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.285 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.542 18:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:20:44.495 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:44.495 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.495 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.495 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.495 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.495 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.496 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.757 18:52:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.757 18:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.014 00:20:45.014 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.014 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.014 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.273 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.273 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.273 18:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.273 18:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.273 18:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.273 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.273 { 00:20:45.273 "cntlid": 69, 00:20:45.273 "qid": 0, 00:20:45.273 "state": "enabled", 00:20:45.273 "thread": 
"nvmf_tgt_poll_group_000", 00:20:45.273 "listen_address": { 00:20:45.273 "trtype": "TCP", 00:20:45.273 "adrfam": "IPv4", 00:20:45.273 "traddr": "10.0.0.2", 00:20:45.273 "trsvcid": "4420" 00:20:45.273 }, 00:20:45.273 "peer_address": { 00:20:45.273 "trtype": "TCP", 00:20:45.273 "adrfam": "IPv4", 00:20:45.273 "traddr": "10.0.0.1", 00:20:45.273 "trsvcid": "49042" 00:20:45.273 }, 00:20:45.273 "auth": { 00:20:45.273 "state": "completed", 00:20:45.273 "digest": "sha384", 00:20:45.273 "dhgroup": "ffdhe3072" 00:20:45.273 } 00:20:45.273 } 00:20:45.273 ]' 00:20:45.273 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.530 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.530 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.530 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.530 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.530 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.530 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.530 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.786 18:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:20:46.717 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.717 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.717 18:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.717 18:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.717 18:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.717 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.717 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.717 18:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.975 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.540 00:20:47.540 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.540 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.540 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.540 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.811 { 00:20:47.811 "cntlid": 71, 00:20:47.811 "qid": 0, 00:20:47.811 "state": "enabled", 00:20:47.811 "thread": 
"nvmf_tgt_poll_group_000", 00:20:47.811 "listen_address": { 00:20:47.811 "trtype": "TCP", 00:20:47.811 "adrfam": "IPv4", 00:20:47.811 "traddr": "10.0.0.2", 00:20:47.811 "trsvcid": "4420" 00:20:47.811 }, 00:20:47.811 "peer_address": { 00:20:47.811 "trtype": "TCP", 00:20:47.811 "adrfam": "IPv4", 00:20:47.811 "traddr": "10.0.0.1", 00:20:47.811 "trsvcid": "49062" 00:20:47.811 }, 00:20:47.811 "auth": { 00:20:47.811 "state": "completed", 00:20:47.811 "digest": "sha384", 00:20:47.811 "dhgroup": "ffdhe3072" 00:20:47.811 } 00:20:47.811 } 00:20:47.811 ]' 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.811 18:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.075 18:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:20:49.006 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.006 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.006 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.006 18:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.006 18:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.006 18:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.006 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.007 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.007 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.007 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.264 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.828 00:20:49.828 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.828 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.828 18:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:50.086 { 00:20:50.086 "cntlid": 73, 00:20:50.086 "qid": 0, 00:20:50.086 "state": "enabled", 00:20:50.086 "thread": "nvmf_tgt_poll_group_000", 00:20:50.086 "listen_address": { 00:20:50.086 "trtype": "TCP", 00:20:50.086 "adrfam": "IPv4", 00:20:50.086 "traddr": "10.0.0.2", 00:20:50.086 "trsvcid": "4420" 00:20:50.086 }, 00:20:50.086 "peer_address": { 00:20:50.086 "trtype": "TCP", 00:20:50.086 "adrfam": "IPv4", 00:20:50.086 "traddr": "10.0.0.1", 00:20:50.086 "trsvcid": "49094" 00:20:50.086 }, 00:20:50.086 "auth": { 00:20:50.086 "state": "completed", 00:20:50.086 "digest": "sha384", 00:20:50.086 "dhgroup": "ffdhe4096" 00:20:50.086 } 00:20:50.086 } 00:20:50.086 ]' 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.086 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.344 18:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret 
DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:20:51.276 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.276 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.276 18:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.276 18:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.276 18:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.276 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.276 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.276 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.534 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:51.534 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.534 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.534 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:51.534 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.534 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.534 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.534 18:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.534 18:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.791 18:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.791 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.791 18:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.049 00:20:52.049 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.049 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.049 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.306 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.306 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.306 18:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.306 18:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.307 18:52:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.307 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.307 { 00:20:52.307 "cntlid": 75, 00:20:52.307 "qid": 0, 00:20:52.307 "state": "enabled", 00:20:52.307 "thread": "nvmf_tgt_poll_group_000", 00:20:52.307 "listen_address": { 00:20:52.307 "trtype": "TCP", 00:20:52.307 "adrfam": "IPv4", 00:20:52.307 "traddr": "10.0.0.2", 00:20:52.307 "trsvcid": "4420" 00:20:52.307 }, 00:20:52.307 "peer_address": { 00:20:52.307 "trtype": "TCP", 00:20:52.307 "adrfam": "IPv4", 00:20:52.307 "traddr": "10.0.0.1", 00:20:52.307 "trsvcid": "49114" 00:20:52.307 }, 00:20:52.307 "auth": { 00:20:52.307 "state": "completed", 00:20:52.307 "digest": "sha384", 00:20:52.307 "dhgroup": "ffdhe4096" 00:20:52.307 } 00:20:52.307 } 00:20:52.307 ]' 00:20:52.307 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.564 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.564 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.564 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.564 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.564 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.564 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.564 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.821 18:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:20:53.755 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.755 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.755 18:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.755 18:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.755 18:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.755 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.755 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.755 18:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.057 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:54.057 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.057 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.057 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:54.057 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.058 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:54.058 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.058 18:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.058 18:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.058 18:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.058 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.058 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.337 00:20:54.337 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.337 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.337 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.594 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.594 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.594 18:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.594 18:52:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.594 18:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.594 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.594 { 00:20:54.594 "cntlid": 77, 00:20:54.594 "qid": 0, 00:20:54.594 "state": "enabled", 00:20:54.594 "thread": "nvmf_tgt_poll_group_000", 00:20:54.594 "listen_address": { 00:20:54.594 "trtype": "TCP", 00:20:54.594 "adrfam": "IPv4", 00:20:54.594 "traddr": "10.0.0.2", 00:20:54.594 "trsvcid": "4420" 00:20:54.594 }, 00:20:54.594 "peer_address": { 00:20:54.594 "trtype": "TCP", 00:20:54.595 "adrfam": "IPv4", 00:20:54.595 "traddr": "10.0.0.1", 00:20:54.595 "trsvcid": "59920" 00:20:54.595 }, 00:20:54.595 "auth": { 00:20:54.595 "state": "completed", 00:20:54.595 "digest": "sha384", 00:20:54.595 "dhgroup": "ffdhe4096" 00:20:54.595 } 00:20:54.595 } 00:20:54.595 ]' 00:20:54.595 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.595 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.595 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.595 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:54.595 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.595 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.595 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.595 18:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.852 18:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:56.223 18:52:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.223 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.481 00:20:56.739 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.739 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.739 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.739 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.739 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.739 18:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.739 18:52:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.996 18:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.996 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.996 { 00:20:56.996 "cntlid": 79, 00:20:56.996 "qid": 0, 00:20:56.996 "state": "enabled", 00:20:56.996 "thread": "nvmf_tgt_poll_group_000", 00:20:56.996 "listen_address": { 00:20:56.996 "trtype": "TCP", 00:20:56.996 "adrfam": "IPv4", 00:20:56.996 "traddr": "10.0.0.2", 00:20:56.996 "trsvcid": "4420" 00:20:56.996 }, 00:20:56.996 "peer_address": { 00:20:56.996 "trtype": "TCP", 00:20:56.996 "adrfam": "IPv4", 00:20:56.996 "traddr": "10.0.0.1", 00:20:56.996 "trsvcid": "59942" 00:20:56.996 }, 00:20:56.996 "auth": { 00:20:56.996 "state": "completed", 00:20:56.996 "digest": "sha384", 00:20:56.996 "dhgroup": "ffdhe4096" 00:20:56.996 } 00:20:56.996 } 00:20:56.996 ]' 00:20:56.996 18:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.996 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.996 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.996 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.996 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.996 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.996 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.996 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.254 18:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:20:58.186 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.186 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.186 18:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.186 18:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.186 18:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.186 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.186 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.186 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.186 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.443 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:58.443 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.443 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.443 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:58.443 18:52:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:20:58.443 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.443 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.443 18:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.443 18:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.444 18:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.444 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.444 18:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.008 00:20:59.008 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.008 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.008 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.265 { 00:20:59.265 "cntlid": 81, 00:20:59.265 "qid": 0, 00:20:59.265 "state": "enabled", 00:20:59.265 "thread": "nvmf_tgt_poll_group_000", 00:20:59.265 "listen_address": { 00:20:59.265 "trtype": "TCP", 00:20:59.265 "adrfam": "IPv4", 00:20:59.265 "traddr": "10.0.0.2", 00:20:59.265 "trsvcid": "4420" 00:20:59.265 }, 00:20:59.265 "peer_address": { 00:20:59.265 "trtype": "TCP", 00:20:59.265 "adrfam": "IPv4", 00:20:59.265 "traddr": "10.0.0.1", 00:20:59.265 "trsvcid": "59964" 00:20:59.265 }, 00:20:59.265 "auth": { 00:20:59.265 "state": "completed", 00:20:59.265 "digest": "sha384", 00:20:59.265 "dhgroup": "ffdhe6144" 00:20:59.265 } 00:20:59.265 } 00:20:59.265 ]' 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.265 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.522 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.522 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.522 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:59.780 18:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:21:00.712 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.712 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.712 18:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.712 18:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.712 18:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.712 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.712 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.712 18:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.969 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:00.969 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.969 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:21:00.970 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:00.970 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.970 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.970 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.970 18:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.970 18:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.970 18:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.970 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.970 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.534 00:21:01.534 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.534 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.534 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.792 { 00:21:01.792 "cntlid": 83, 00:21:01.792 "qid": 0, 00:21:01.792 "state": "enabled", 00:21:01.792 "thread": "nvmf_tgt_poll_group_000", 00:21:01.792 "listen_address": { 00:21:01.792 "trtype": "TCP", 00:21:01.792 "adrfam": "IPv4", 00:21:01.792 "traddr": "10.0.0.2", 00:21:01.792 "trsvcid": "4420" 00:21:01.792 }, 00:21:01.792 "peer_address": { 00:21:01.792 "trtype": "TCP", 00:21:01.792 "adrfam": "IPv4", 00:21:01.792 "traddr": "10.0.0.1", 00:21:01.792 "trsvcid": "60002" 00:21:01.792 }, 00:21:01.792 "auth": { 00:21:01.792 "state": "completed", 00:21:01.792 "digest": "sha384", 00:21:01.792 "dhgroup": "ffdhe6144" 00:21:01.792 } 00:21:01.792 } 00:21:01.792 ]' 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:01.792 18:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.793 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.793 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.793 18:52:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.050 18:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.421 18:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.986 00:21:03.986 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.986 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.986 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:21:04.243 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.243 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.243 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.243 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.243 18:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.243 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.243 { 00:21:04.243 "cntlid": 85, 00:21:04.243 "qid": 0, 00:21:04.243 "state": "enabled", 00:21:04.243 "thread": "nvmf_tgt_poll_group_000", 00:21:04.243 "listen_address": { 00:21:04.243 "trtype": "TCP", 00:21:04.243 "adrfam": "IPv4", 00:21:04.243 "traddr": "10.0.0.2", 00:21:04.243 "trsvcid": "4420" 00:21:04.243 }, 00:21:04.243 "peer_address": { 00:21:04.243 "trtype": "TCP", 00:21:04.243 "adrfam": "IPv4", 00:21:04.243 "traddr": "10.0.0.1", 00:21:04.243 "trsvcid": "49860" 00:21:04.243 }, 00:21:04.243 "auth": { 00:21:04.243 "state": "completed", 00:21:04.243 "digest": "sha384", 00:21:04.243 "dhgroup": "ffdhe6144" 00:21:04.243 } 00:21:04.243 } 00:21:04.243 ]' 00:21:04.243 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.243 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.243 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.500 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.500 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.500 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.500 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:04.500 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.758 18:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:21:05.688 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.688 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.688 18:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.688 18:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.688 18:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.688 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.688 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.688 18:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:05.945 18:52:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.945 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.508 00:21:06.508 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.508 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.508 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:06.766 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.766 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.766 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.766 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.766 18:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.766 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.766 { 00:21:06.766 "cntlid": 87, 00:21:06.766 "qid": 0, 00:21:06.766 "state": "enabled", 00:21:06.766 "thread": "nvmf_tgt_poll_group_000", 00:21:06.766 "listen_address": { 00:21:06.766 "trtype": "TCP", 00:21:06.766 "adrfam": "IPv4", 00:21:06.766 "traddr": "10.0.0.2", 00:21:06.766 "trsvcid": "4420" 00:21:06.766 }, 00:21:06.766 "peer_address": { 00:21:06.766 "trtype": "TCP", 00:21:06.766 "adrfam": "IPv4", 00:21:06.766 "traddr": "10.0.0.1", 00:21:06.766 "trsvcid": "49892" 00:21:06.766 }, 00:21:06.766 "auth": { 00:21:06.766 "state": "completed", 00:21:06.766 "digest": "sha384", 00:21:06.766 "dhgroup": "ffdhe6144" 00:21:06.766 } 00:21:06.766 } 00:21:06.766 ]' 00:21:06.766 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.766 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.766 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.023 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:07.023 18:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.023 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.023 18:52:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.023 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.281 18:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:21:08.212 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.212 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.212 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.212 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.212 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.212 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.212 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.212 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.212 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.470 18:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.402 00:21:09.402 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.402 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.402 18:52:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.660 { 00:21:09.660 "cntlid": 89, 00:21:09.660 "qid": 0, 00:21:09.660 "state": "enabled", 00:21:09.660 "thread": "nvmf_tgt_poll_group_000", 00:21:09.660 "listen_address": { 00:21:09.660 "trtype": "TCP", 00:21:09.660 "adrfam": "IPv4", 00:21:09.660 "traddr": "10.0.0.2", 00:21:09.660 "trsvcid": "4420" 00:21:09.660 }, 00:21:09.660 "peer_address": { 00:21:09.660 "trtype": "TCP", 00:21:09.660 "adrfam": "IPv4", 00:21:09.660 "traddr": "10.0.0.1", 00:21:09.660 "trsvcid": "49916" 00:21:09.660 }, 00:21:09.660 "auth": { 00:21:09.660 "state": "completed", 00:21:09.660 "digest": "sha384", 00:21:09.660 "dhgroup": "ffdhe8192" 00:21:09.660 } 00:21:09.660 } 00:21:09.660 ]' 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.660 18:52:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.660 18:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.918 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:21:10.865 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.865 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.865 18:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.865 18:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.865 18:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.865 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.865 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:10.865 18:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.122 18:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.055 00:21:12.055 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:21:12.055 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.055 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.318 { 00:21:12.318 "cntlid": 91, 00:21:12.318 "qid": 0, 00:21:12.318 "state": "enabled", 00:21:12.318 "thread": "nvmf_tgt_poll_group_000", 00:21:12.318 "listen_address": { 00:21:12.318 "trtype": "TCP", 00:21:12.318 "adrfam": "IPv4", 00:21:12.318 "traddr": "10.0.0.2", 00:21:12.318 "trsvcid": "4420" 00:21:12.318 }, 00:21:12.318 "peer_address": { 00:21:12.318 "trtype": "TCP", 00:21:12.318 "adrfam": "IPv4", 00:21:12.318 "traddr": "10.0.0.1", 00:21:12.318 "trsvcid": "49946" 00:21:12.318 }, 00:21:12.318 "auth": { 00:21:12.318 "state": "completed", 00:21:12.318 "digest": "sha384", 00:21:12.318 "dhgroup": "ffdhe8192" 00:21:12.318 } 00:21:12.318 } 00:21:12.318 ]' 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:21:12.318 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.576 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.576 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.576 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.833 18:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:21:13.766 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.766 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.766 18:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.766 18:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.766 18:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.766 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.766 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:13.766 18:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.024 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.956 
00:21:14.956 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.956 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.956 18:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.213 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.213 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.213 18:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.213 18:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.213 18:53:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.213 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.213 { 00:21:15.213 "cntlid": 93, 00:21:15.213 "qid": 0, 00:21:15.213 "state": "enabled", 00:21:15.213 "thread": "nvmf_tgt_poll_group_000", 00:21:15.213 "listen_address": { 00:21:15.213 "trtype": "TCP", 00:21:15.213 "adrfam": "IPv4", 00:21:15.213 "traddr": "10.0.0.2", 00:21:15.213 "trsvcid": "4420" 00:21:15.213 }, 00:21:15.213 "peer_address": { 00:21:15.214 "trtype": "TCP", 00:21:15.214 "adrfam": "IPv4", 00:21:15.214 "traddr": "10.0.0.1", 00:21:15.214 "trsvcid": "48204" 00:21:15.214 }, 00:21:15.214 "auth": { 00:21:15.214 "state": "completed", 00:21:15.214 "digest": "sha384", 00:21:15.214 "dhgroup": "ffdhe8192" 00:21:15.214 } 00:21:15.214 } 00:21:15.214 ]' 00:21:15.214 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.214 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.214 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.214 18:53:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.214 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.214 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.214 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.214 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.471 18:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:21:16.402 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.402 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.402 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.402 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.402 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.402 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.402 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:21:16.402 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:16.659 18:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.591 
00:21:17.591 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.591 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.591 18:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.848 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.848 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.848 18:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.848 18:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.848 18:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.848 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.848 { 00:21:17.848 "cntlid": 95, 00:21:17.848 "qid": 0, 00:21:17.848 "state": "enabled", 00:21:17.848 "thread": "nvmf_tgt_poll_group_000", 00:21:17.848 "listen_address": { 00:21:17.848 "trtype": "TCP", 00:21:17.848 "adrfam": "IPv4", 00:21:17.848 "traddr": "10.0.0.2", 00:21:17.848 "trsvcid": "4420" 00:21:17.848 }, 00:21:17.848 "peer_address": { 00:21:17.848 "trtype": "TCP", 00:21:17.848 "adrfam": "IPv4", 00:21:17.848 "traddr": "10.0.0.1", 00:21:17.848 "trsvcid": "48216" 00:21:17.848 }, 00:21:17.848 "auth": { 00:21:17.848 "state": "completed", 00:21:17.848 "digest": "sha384", 00:21:17.848 "dhgroup": "ffdhe8192" 00:21:17.848 } 00:21:17.848 } 00:21:17.848 ]' 00:21:17.848 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.848 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.848 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.105 18:53:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.105 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.105 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.105 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.105 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.361 18:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
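The three jq probes repeated above (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) check the qpair that `nvmf_subsystem_get_qpairs` returns after each authenticated attach. As a standalone sketch, the same checks in Python, with the qpair JSON inlined from this log rather than fetched from a live target (in the real test `target/auth.sh` pipes the live RPC output through jq):

```python
import json

# Qpair JSON as reported by `rpc.py nvmf_subsystem_get_qpairs` in the log
# above (log timestamps stripped); inlined here instead of queried live.
qpairs_json = '''
[
  {
    "cntlid": 91,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000",
    "listen_address": {
      "trtype": "TCP", "adrfam": "IPv4",
      "traddr": "10.0.0.2", "trsvcid": "4420"
    },
    "peer_address": {
      "trtype": "TCP", "adrfam": "IPv4",
      "traddr": "10.0.0.1", "trsvcid": "49946"
    },
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "ffdhe8192"
    }
  }
]
'''

def check_auth(qpairs_text, digest, dhgroup):
    """Mirror the jq '.[0].auth.*' assertions from target/auth.sh."""
    auth = json.loads(qpairs_text)[0]["auth"]
    assert auth["digest"] == digest, auth
    assert auth["dhgroup"] == dhgroup, auth
    assert auth["state"] == "completed", auth
    return auth

print(check_auth(qpairs_json, "sha384", "ffdhe8192"))
```

Each `connect_authenticate` iteration in the log performs exactly this validation for its own digest/dhgroup pair (sha384/ffdhe8192 above, sha512/null below) before detaching the controller.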
00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.292 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.549 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.807 00:21:19.807 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.807 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.807 18:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.065 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.065 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.065 18:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.065 18:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.065 18:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.065 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.065 { 00:21:20.065 "cntlid": 97, 00:21:20.065 "qid": 0, 00:21:20.065 "state": "enabled", 00:21:20.065 "thread": "nvmf_tgt_poll_group_000", 00:21:20.065 "listen_address": { 00:21:20.065 "trtype": "TCP", 00:21:20.065 "adrfam": "IPv4", 00:21:20.065 "traddr": "10.0.0.2", 00:21:20.065 "trsvcid": "4420" 00:21:20.065 }, 00:21:20.065 "peer_address": { 00:21:20.065 "trtype": "TCP", 00:21:20.065 "adrfam": "IPv4", 00:21:20.065 "traddr": "10.0.0.1", 00:21:20.065 "trsvcid": "48248" 00:21:20.065 }, 00:21:20.065 "auth": { 00:21:20.065 "state": "completed", 00:21:20.065 "digest": "sha512", 00:21:20.065 "dhgroup": "null" 00:21:20.065 } 00:21:20.065 } 00:21:20.065 ]' 00:21:20.065 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:21:20.065 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.065 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.322 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:20.322 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.322 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.322 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.322 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.580 18:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:21:21.513 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.513 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.513 18:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.513 18:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.513 18:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
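The `--dhchap-secret`/`--dhchap-ctrl-secret` strings passed to `nvme connect` above use the NVMe-oF DH-HMAC-CHAP secret representation, `DHHC-1:<hh>:<base64>:`, where `<hh>` identifies the hash applied to the key (`00` for an unhashed secret, `01`/`02`/`03` for SHA-256/384/512) and the base64 payload carries the key material plus a trailing 4-byte CRC-32 of the key. A minimal parser sketch; the trailing little-endian CRC-32 layout is stated here from memory of the spec, so treat that detail as an assumption to verify:

```python
import base64
import zlib

def parse_dhchap_secret(secret):
    """Split a DHHC-1 secret into (hash id, key bytes, stored CRC, computed CRC).

    Format assumption: 'DHHC-1:<hh>:<base64(key || crc32_le(key))>:'
    """
    prefix, hash_id, payload, _trailer = secret.split(":")
    assert prefix == "DHHC-1", "not a DH-HMAC-CHAP secret"
    raw = base64.b64decode(payload)
    key, crc = raw[:-4], raw[-4:]
    stored = int.from_bytes(crc, "little")   # assumed endianness
    computed = zlib.crc32(key)
    return hash_id, key, stored, computed

# One of the key1 secrets from the nvme connect command in the log above
secret = "DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB:"
hash_id, key, stored, computed = parse_dhchap_secret(secret)
print(hash_id, len(key))
```

For the 48-character payload above this yields a 32-byte key, consistent with the fixed-length test keys the autotest generates.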
00:21:21.513 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.513 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.513 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.770 18:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.028 00:21:22.028 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.028 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.028 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.285 { 00:21:22.285 "cntlid": 99, 00:21:22.285 "qid": 0, 00:21:22.285 "state": "enabled", 00:21:22.285 "thread": "nvmf_tgt_poll_group_000", 00:21:22.285 "listen_address": { 00:21:22.285 "trtype": "TCP", 00:21:22.285 "adrfam": "IPv4", 00:21:22.285 "traddr": "10.0.0.2", 00:21:22.285 "trsvcid": "4420" 00:21:22.285 }, 00:21:22.285 "peer_address": { 00:21:22.285 "trtype": "TCP", 00:21:22.285 "adrfam": "IPv4", 00:21:22.285 "traddr": "10.0.0.1", 00:21:22.285 "trsvcid": "48268" 00:21:22.285 }, 00:21:22.285 "auth": { 00:21:22.285 "state": "completed", 00:21:22.285 "digest": "sha512", 00:21:22.285 "dhgroup": "null" 00:21:22.285 } 00:21:22.285 } 00:21:22.285 ]' 00:21:22.285 
18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.285 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.543 18:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.918 18:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.918 18:53:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.176 00:21:24.176 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.176 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.176 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.434 { 00:21:24.434 "cntlid": 101, 00:21:24.434 "qid": 0, 00:21:24.434 "state": "enabled", 00:21:24.434 "thread": "nvmf_tgt_poll_group_000", 00:21:24.434 "listen_address": { 00:21:24.434 "trtype": "TCP", 00:21:24.434 "adrfam": "IPv4", 00:21:24.434 "traddr": "10.0.0.2", 00:21:24.434 "trsvcid": "4420" 00:21:24.434 }, 00:21:24.434 "peer_address": { 00:21:24.434 "trtype": "TCP", 00:21:24.434 "adrfam": "IPv4", 00:21:24.434 "traddr": "10.0.0.1", 00:21:24.434 "trsvcid": "43682" 00:21:24.434 }, 00:21:24.434 "auth": { 00:21:24.434 "state": "completed", 00:21:24.434 "digest": "sha512", 00:21:24.434 "dhgroup": "null" 
00:21:24.434 }
00:21:24.434 }
00:21:24.434 ]'
00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:21:24.434 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:24.692 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:24.692 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:24.692 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:24.949 18:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo:
00:21:25.881 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:25.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:25.881 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:25.881 18:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:25.881 18:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.881 18:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:25.881 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:25.881 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:25.881 18:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:26.139 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:26.397
00:21:26.397 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:26.397 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:26.397 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:26.655 {
00:21:26.655 "cntlid": 103,
00:21:26.655 "qid": 0,
00:21:26.655 "state": "enabled",
00:21:26.655 "thread": "nvmf_tgt_poll_group_000",
00:21:26.655 "listen_address": {
00:21:26.655 "trtype": "TCP",
00:21:26.655 "adrfam": "IPv4",
00:21:26.655 "traddr": "10.0.0.2",
00:21:26.655 "trsvcid": "4420"
00:21:26.655 },
00:21:26.655 "peer_address": {
00:21:26.655 "trtype": "TCP",
00:21:26.655 "adrfam": "IPv4",
00:21:26.655 "traddr": "10.0.0.1",
00:21:26.655 "trsvcid": "43716"
00:21:26.655 },
00:21:26.655 "auth": {
00:21:26.655 "state": "completed",
00:21:26.655 "digest": "sha512",
00:21:26.655 "dhgroup": "null"
00:21:26.655 }
00:21:26.655 }
00:21:26.655 ]'
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:26.655 18:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:26.914 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=:
00:21:27.847 18:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:27.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:27.847 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:27.847 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:27.847 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:27.847 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:27.847 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:21:27.847 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:27.847 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:27.847 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:28.105 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:21:28.105 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:28.105 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:28.105 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:21:28.106 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:21:28.106 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:28.106 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:28.106 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:28.106 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.106 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:28.106 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:28.106 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:28.363
00:21:28.363 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:28.363 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:28.363 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:28.622 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:28.622 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:28.622 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:28.622 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:28.622 18:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:28.622 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:28.622 {
00:21:28.622 "cntlid": 105,
00:21:28.622 "qid": 0,
00:21:28.622 "state": "enabled",
00:21:28.622 "thread": "nvmf_tgt_poll_group_000",
00:21:28.622 "listen_address": {
00:21:28.622 "trtype": "TCP",
00:21:28.622 "adrfam": "IPv4",
00:21:28.622 "traddr": "10.0.0.2",
00:21:28.622 "trsvcid": "4420"
00:21:28.622 },
00:21:28.622 "peer_address": {
00:21:28.622 "trtype": "TCP",
00:21:28.622 "adrfam": "IPv4",
00:21:28.622 "traddr": "10.0.0.1",
00:21:28.622 "trsvcid": "43742"
00:21:28.622 },
00:21:28.622 "auth": { "state": "completed",
00:21:28.622 "digest": "sha512",
00:21:28.622 "dhgroup": "ffdhe2048"
00:21:28.622 }
00:21:28.622 }
00:21:28.622 ]'
00:21:28.879 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:28.879 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:28.879 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:28.879 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:28.879 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:28.879 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:28.879 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:28.879 18:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:29.136 18:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=:
00:21:30.077 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:30.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:30.077 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:30.077 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:30.077 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:30.077 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:30.077 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:30.077 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:30.077 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:30.364 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:30.622
00:21:30.622 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:30.622 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:30.622 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:30.880 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:30.880 18:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:30.880 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:30.880 18:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:30.880 18:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:30.880 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:30.880 {
00:21:30.880 "cntlid": 107,
00:21:30.880 "qid": 0,
00:21:30.880 "state": "enabled",
00:21:30.880 "thread": "nvmf_tgt_poll_group_000",
00:21:30.880 "listen_address": {
00:21:30.880 "trtype": "TCP",
00:21:30.880 "adrfam": "IPv4",
00:21:30.880 "traddr": "10.0.0.2",
00:21:30.880 "trsvcid": "4420"
00:21:30.880 },
00:21:30.880 "peer_address": {
00:21:30.880 "trtype": "TCP",
00:21:30.880 "adrfam": "IPv4",
00:21:30.880 "traddr": "10.0.0.1",
00:21:30.880 "trsvcid": "43758"
00:21:30.880 },
00:21:30.880 "auth": {
00:21:30.880 "state": "completed",
00:21:30.880 "digest": "sha512",
00:21:30.880 "dhgroup": "ffdhe2048"
00:21:30.880 }
00:21:30.880 }
00:21:30.880 ]'
00:21:30.880 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:30.880 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:30.880 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:30.880 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:30.880 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:31.138 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:31.138 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:31.138 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:31.396 18:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==:
00:21:32.331 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:32.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:32.331 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:32.331 18:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:32.331 18:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.331 18:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:32.331 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:32.331 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:32.331 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:32.589 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:32.847
00:21:32.847 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:32.847 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:32.847 18:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:33.105 {
00:21:33.105 "cntlid": 109,
00:21:33.105 "qid": 0,
00:21:33.105 "state": "enabled",
00:21:33.105 "thread": "nvmf_tgt_poll_group_000",
00:21:33.105 "listen_address": {
00:21:33.105 "trtype": "TCP",
00:21:33.105 "adrfam": "IPv4",
00:21:33.105 "traddr": "10.0.0.2",
00:21:33.105 "trsvcid": "4420"
00:21:33.105 },
00:21:33.105 "peer_address": {
00:21:33.105 "trtype": "TCP",
00:21:33.105 "adrfam": "IPv4",
00:21:33.105 "traddr": "10.0.0.1",
00:21:33.105 "trsvcid": "43788"
00:21:33.105 },
00:21:33.105 "auth": {
00:21:33.105 "state": "completed",
00:21:33.105 "digest": "sha512",
00:21:33.105 "dhgroup": "ffdhe2048"
00:21:33.105 }
00:21:33.105 }
00:21:33.105 ]'
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:33.105 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:33.362 18:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo:
00:21:34.294 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:34.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:34.294 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:34.294 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:34.294 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.552 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:34.552 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:34.552 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:34.552 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:34.809 18:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:35.067
00:21:35.067 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:35.067 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:35.067 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:35.324 {
00:21:35.324 "cntlid": 111,
00:21:35.324 "qid": 0,
00:21:35.324 "state": "enabled",
00:21:35.324 "thread": "nvmf_tgt_poll_group_000",
00:21:35.324 "listen_address": {
00:21:35.324 "trtype": "TCP",
00:21:35.324 "adrfam": "IPv4",
00:21:35.324 "traddr": "10.0.0.2",
00:21:35.324 "trsvcid": "4420"
00:21:35.324 },
00:21:35.324 "peer_address": {
00:21:35.324 "trtype": "TCP",
00:21:35.324 "adrfam": "IPv4",
00:21:35.324 "traddr": "10.0.0.1",
00:21:35.324 "trsvcid": "57214"
00:21:35.324 },
00:21:35.324 "auth": {
00:21:35.324 "state": "completed",
00:21:35.324 "digest": "sha512",
00:21:35.324 "dhgroup": "ffdhe2048"
00:21:35.324 }
00:21:35.324 }
00:21:35.324 ]'
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:35.324 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:35.581 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:35.581 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:35.581 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:35.838 18:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=:
00:21:36.770 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:36.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:36.770 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:36.770 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:36.770 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:36.770 18:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:36.770 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:21:36.770 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:36.770 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:36.770 18:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:37.027 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:21:37.027 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:37.027 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:37.027 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:21:37.028 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:21:37.028 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:37.028 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.028 18:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:37.028 18:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.028 18:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:37.028 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.028 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.285
00:21:37.285 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:37.285 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:37.285 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:37.543 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:37.543 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:37.543 18:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:37.543 18:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:37.801 {
00:21:37.801 "cntlid": 113,
00:21:37.801 "qid": 0,
00:21:37.801 "state": "enabled",
00:21:37.801 "thread": "nvmf_tgt_poll_group_000",
00:21:37.801 "listen_address": {
00:21:37.801 "trtype": "TCP",
00:21:37.801 "adrfam": "IPv4",
00:21:37.801 "traddr": "10.0.0.2",
00:21:37.801 "trsvcid": "4420"
00:21:37.801 },
00:21:37.801 "peer_address": {
00:21:37.801 "trtype": "TCP",
00:21:37.801 "adrfam": "IPv4",
00:21:37.801 "traddr": "10.0.0.1",
00:21:37.801 "trsvcid": "57240"
00:21:37.801 },
00:21:37.801 "auth": {
00:21:37.801 "state": "completed",
00:21:37.801 "digest": "sha512",
00:21:37.801 "dhgroup": "ffdhe3072"
00:21:37.801 }
00:21:37.801 }
00:21:37.801 ]'
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:37.801 18:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:38.060 18:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=:
00:21:38.992 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:38.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:38.992 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:38.992 18:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:38.992 18:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:38.992 18:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:38.992 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:38.992 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:38.992 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:39.250 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:39.815
00:21:39.815 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:39.815 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:39.815 18:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:39.815 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:39.815 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:39.815 18:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:39.815 18:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.815 18:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:39.815 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- #
qpairs='[ 00:21:39.815 { 00:21:39.815 "cntlid": 115, 00:21:39.815 "qid": 0, 00:21:39.815 "state": "enabled", 00:21:39.815 "thread": "nvmf_tgt_poll_group_000", 00:21:39.815 "listen_address": { 00:21:39.815 "trtype": "TCP", 00:21:39.815 "adrfam": "IPv4", 00:21:39.815 "traddr": "10.0.0.2", 00:21:39.815 "trsvcid": "4420" 00:21:39.815 }, 00:21:39.815 "peer_address": { 00:21:39.815 "trtype": "TCP", 00:21:39.815 "adrfam": "IPv4", 00:21:39.815 "traddr": "10.0.0.1", 00:21:39.815 "trsvcid": "57268" 00:21:39.815 }, 00:21:39.815 "auth": { 00:21:39.815 "state": "completed", 00:21:39.815 "digest": "sha512", 00:21:39.815 "dhgroup": "ffdhe3072" 00:21:39.815 } 00:21:39.815 } 00:21:39.815 ]' 00:21:39.815 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.072 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.072 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.072 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:40.072 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.072 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.072 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.072 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.330 18:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret 
DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:21:41.262 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.262 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.262 18:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.262 18:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.262 18:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.262 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.262 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.262 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.519 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.777 00:21:41.777 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.777 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.777 18:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.035 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.035 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.035 18:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.035 18:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.035 18:53:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.035 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.035 { 00:21:42.035 "cntlid": 117, 00:21:42.035 "qid": 0, 00:21:42.035 "state": "enabled", 00:21:42.035 "thread": "nvmf_tgt_poll_group_000", 00:21:42.035 "listen_address": { 00:21:42.035 "trtype": "TCP", 00:21:42.035 "adrfam": "IPv4", 00:21:42.035 "traddr": "10.0.0.2", 00:21:42.035 "trsvcid": "4420" 00:21:42.035 }, 00:21:42.035 "peer_address": { 00:21:42.035 "trtype": "TCP", 00:21:42.035 "adrfam": "IPv4", 00:21:42.035 "traddr": "10.0.0.1", 00:21:42.035 "trsvcid": "57292" 00:21:42.035 }, 00:21:42.035 "auth": { 00:21:42.035 "state": "completed", 00:21:42.035 "digest": "sha512", 00:21:42.035 "dhgroup": "ffdhe3072" 00:21:42.035 } 00:21:42.035 } 00:21:42.035 ]' 00:21:42.035 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.035 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.035 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.293 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:42.293 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.293 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.293 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.293 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.551 18:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:21:43.484 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.484 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.484 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.484 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.484 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.484 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.485 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.485 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.743 18:53:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:43.743 18:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.001 00:21:44.001 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.001 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.001 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.259 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.259 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.259 18:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.259 18:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.259 18:53:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.259 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.259 { 00:21:44.259 "cntlid": 119, 00:21:44.259 "qid": 0, 00:21:44.259 "state": "enabled", 00:21:44.259 "thread": "nvmf_tgt_poll_group_000", 00:21:44.259 "listen_address": { 00:21:44.259 "trtype": "TCP", 00:21:44.259 "adrfam": "IPv4", 00:21:44.259 "traddr": "10.0.0.2", 00:21:44.259 "trsvcid": "4420" 00:21:44.259 }, 00:21:44.259 "peer_address": { 00:21:44.259 "trtype": "TCP", 00:21:44.259 "adrfam": "IPv4", 00:21:44.259 "traddr": "10.0.0.1", 00:21:44.259 "trsvcid": "52684" 00:21:44.259 }, 00:21:44.259 "auth": { 00:21:44.259 "state": "completed", 00:21:44.259 "digest": "sha512", 00:21:44.260 "dhgroup": "ffdhe3072" 00:21:44.260 } 00:21:44.260 } 00:21:44.260 ]' 00:21:44.260 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.260 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.260 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.260 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.260 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.260 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.260 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.260 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.519 18:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:21:45.896 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.896 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.896 18:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.896 18:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.896 18:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.896 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.896 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.896 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.896 18:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.896 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.460 00:21:46.460 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.460 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.460 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.718 { 00:21:46.718 "cntlid": 121, 00:21:46.718 "qid": 0, 00:21:46.718 "state": "enabled", 00:21:46.718 "thread": "nvmf_tgt_poll_group_000", 00:21:46.718 "listen_address": { 00:21:46.718 "trtype": "TCP", 00:21:46.718 "adrfam": "IPv4", 00:21:46.718 "traddr": "10.0.0.2", 00:21:46.718 "trsvcid": "4420" 00:21:46.718 }, 00:21:46.718 "peer_address": { 00:21:46.718 "trtype": "TCP", 00:21:46.718 "adrfam": "IPv4", 00:21:46.718 "traddr": "10.0.0.1", 00:21:46.718 "trsvcid": "52716" 00:21:46.718 }, 00:21:46.718 "auth": { 00:21:46.718 "state": "completed", 00:21:46.718 "digest": "sha512", 00:21:46.718 "dhgroup": "ffdhe4096" 00:21:46.718 } 00:21:46.718 } 00:21:46.718 ]' 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.718 18:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.975 18:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:21:47.959 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.959 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.959 18:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.959 18:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.959 18:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.959 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.959 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.959 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:48.217 18:53:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.217 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.783 00:21:48.783 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.783 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.783 18:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.042 { 00:21:49.042 "cntlid": 123, 00:21:49.042 "qid": 0, 00:21:49.042 "state": "enabled", 00:21:49.042 "thread": "nvmf_tgt_poll_group_000", 00:21:49.042 "listen_address": { 00:21:49.042 "trtype": "TCP", 00:21:49.042 "adrfam": "IPv4", 00:21:49.042 "traddr": "10.0.0.2", 00:21:49.042 "trsvcid": "4420" 00:21:49.042 }, 00:21:49.042 "peer_address": { 00:21:49.042 "trtype": "TCP", 00:21:49.042 "adrfam": "IPv4", 00:21:49.042 "traddr": "10.0.0.1", 00:21:49.042 "trsvcid": "52732" 00:21:49.042 }, 00:21:49.042 "auth": { 00:21:49.042 "state": "completed", 00:21:49.042 "digest": "sha512", 00:21:49.042 "dhgroup": "ffdhe4096" 00:21:49.042 } 00:21:49.042 } 00:21:49.042 ]' 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.042 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.300 18:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:21:50.235 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.235 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.235 18:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.235 18:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.235 18:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.235 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.235 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.235 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.493 18:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.059 00:21:51.059 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.059 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.059 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.317 { 00:21:51.317 "cntlid": 125, 00:21:51.317 "qid": 0, 00:21:51.317 "state": "enabled", 00:21:51.317 "thread": "nvmf_tgt_poll_group_000", 00:21:51.317 "listen_address": { 00:21:51.317 "trtype": "TCP", 00:21:51.317 "adrfam": "IPv4", 00:21:51.317 "traddr": "10.0.0.2", 00:21:51.317 "trsvcid": "4420" 00:21:51.317 }, 00:21:51.317 "peer_address": { 00:21:51.317 "trtype": "TCP", 00:21:51.317 "adrfam": "IPv4", 00:21:51.317 "traddr": "10.0.0.1", 00:21:51.317 "trsvcid": "52768" 00:21:51.317 }, 00:21:51.317 "auth": { 00:21:51.317 "state": "completed", 00:21:51.317 "digest": "sha512", 00:21:51.317 "dhgroup": "ffdhe4096" 00:21:51.317 } 00:21:51.317 } 00:21:51.317 ]' 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.317 18:53:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.575 18:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:21:52.509 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.509 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.509 18:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.509 18:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.509 18:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.509 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.509 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:52.509 18:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.075 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.332 00:21:53.332 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:53.332 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.332 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.589 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:21:53.589 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.589 18:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.589 18:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.589 18:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.589 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.589 { 00:21:53.589 "cntlid": 127, 00:21:53.589 "qid": 0, 00:21:53.589 "state": "enabled", 00:21:53.589 "thread": "nvmf_tgt_poll_group_000", 00:21:53.589 "listen_address": { 00:21:53.589 "trtype": "TCP", 00:21:53.589 "adrfam": "IPv4", 00:21:53.590 "traddr": "10.0.0.2", 00:21:53.590 "trsvcid": "4420" 00:21:53.590 }, 00:21:53.590 "peer_address": { 00:21:53.590 "trtype": "TCP", 00:21:53.590 "adrfam": "IPv4", 00:21:53.590 "traddr": "10.0.0.1", 00:21:53.590 "trsvcid": "54130" 00:21:53.590 }, 00:21:53.590 "auth": { 00:21:53.590 "state": "completed", 00:21:53.590 "digest": "sha512", 00:21:53.590 "dhgroup": "ffdhe4096" 00:21:53.590 } 00:21:53.590 } 00:21:53.590 ]' 00:21:53.590 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.590 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.590 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.590 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:53.590 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.847 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.847 18:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.847 18:53:41 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.847 18:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.223 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.787 00:21:55.787 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.787 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.787 18:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.044 { 00:21:56.044 "cntlid": 129, 00:21:56.044 "qid": 0, 00:21:56.044 "state": "enabled", 00:21:56.044 "thread": "nvmf_tgt_poll_group_000", 00:21:56.044 "listen_address": { 00:21:56.044 "trtype": "TCP", 00:21:56.044 "adrfam": "IPv4", 00:21:56.044 "traddr": "10.0.0.2", 00:21:56.044 "trsvcid": "4420" 00:21:56.044 }, 00:21:56.044 "peer_address": { 00:21:56.044 "trtype": "TCP", 00:21:56.044 "adrfam": "IPv4", 00:21:56.044 "traddr": "10.0.0.1", 00:21:56.044 "trsvcid": "54150" 00:21:56.044 }, 00:21:56.044 "auth": { 00:21:56.044 "state": "completed", 00:21:56.044 "digest": "sha512", 00:21:56.044 "dhgroup": "ffdhe6144" 00:21:56.044 } 00:21:56.044 } 00:21:56.044 ]' 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.044 18:53:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.044 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.302 18:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:57.670 18:53:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:57.670 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.671 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.671 18:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.671 18:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.671 18:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.671 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.671 18:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.236 00:21:58.236 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.236 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.236 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.494 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.494 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.494 18:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.494 18:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.494 18:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.494 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.494 { 00:21:58.494 "cntlid": 131, 00:21:58.494 "qid": 0, 00:21:58.494 "state": "enabled", 00:21:58.494 "thread": "nvmf_tgt_poll_group_000", 00:21:58.494 "listen_address": { 00:21:58.494 "trtype": "TCP", 00:21:58.494 "adrfam": "IPv4", 00:21:58.494 "traddr": "10.0.0.2", 00:21:58.494 "trsvcid": "4420" 00:21:58.494 }, 00:21:58.494 "peer_address": { 00:21:58.494 "trtype": "TCP", 00:21:58.494 "adrfam": "IPv4", 00:21:58.494 "traddr": "10.0.0.1", 00:21:58.494 "trsvcid": "54172" 00:21:58.494 }, 00:21:58.494 "auth": { 00:21:58.494 "state": "completed", 00:21:58.494 "digest": "sha512", 00:21:58.494 "dhgroup": "ffdhe6144" 00:21:58.494 } 00:21:58.494 } 00:21:58.494 ]' 00:21:58.494 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.494 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.494 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.752 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:58.752 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:21:58.752 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.752 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.752 18:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.010 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:21:59.943 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.943 18:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.943 18:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.943 18:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.943 18:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.943 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.943 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.943 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.201 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.769 00:22:00.769 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:22:00.769 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.769 18:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.027 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.027 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.027 18:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.027 18:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.027 18:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.027 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.027 { 00:22:01.027 "cntlid": 133, 00:22:01.027 "qid": 0, 00:22:01.027 "state": "enabled", 00:22:01.027 "thread": "nvmf_tgt_poll_group_000", 00:22:01.027 "listen_address": { 00:22:01.027 "trtype": "TCP", 00:22:01.027 "adrfam": "IPv4", 00:22:01.028 "traddr": "10.0.0.2", 00:22:01.028 "trsvcid": "4420" 00:22:01.028 }, 00:22:01.028 "peer_address": { 00:22:01.028 "trtype": "TCP", 00:22:01.028 "adrfam": "IPv4", 00:22:01.028 "traddr": "10.0.0.1", 00:22:01.028 "trsvcid": "54186" 00:22:01.028 }, 00:22:01.028 "auth": { 00:22:01.028 "state": "completed", 00:22:01.028 "digest": "sha512", 00:22:01.028 "dhgroup": "ffdhe6144" 00:22:01.028 } 00:22:01.028 } 00:22:01.028 ]' 00:22:01.028 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.028 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.028 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.028 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:01.028 18:53:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:01.028 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.028 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.028 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.287 18:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.667 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:02.668 18:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:03.236 00:22:03.236 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:22:03.236 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.236 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.494 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.494 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.494 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.494 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.494 18:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.494 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.494 { 00:22:03.494 "cntlid": 135, 00:22:03.494 "qid": 0, 00:22:03.494 "state": "enabled", 00:22:03.494 "thread": "nvmf_tgt_poll_group_000", 00:22:03.494 "listen_address": { 00:22:03.494 "trtype": "TCP", 00:22:03.494 "adrfam": "IPv4", 00:22:03.494 "traddr": "10.0.0.2", 00:22:03.494 "trsvcid": "4420" 00:22:03.494 }, 00:22:03.494 "peer_address": { 00:22:03.494 "trtype": "TCP", 00:22:03.494 "adrfam": "IPv4", 00:22:03.494 "traddr": "10.0.0.1", 00:22:03.494 "trsvcid": "36168" 00:22:03.494 }, 00:22:03.494 "auth": { 00:22:03.494 "state": "completed", 00:22:03.494 "digest": "sha512", 00:22:03.494 "dhgroup": "ffdhe6144" 00:22:03.494 } 00:22:03.494 } 00:22:03.494 ]' 00:22:03.494 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.494 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.494 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.752 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:22:03.752 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.752 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.752 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.752 18:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.011 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:22:04.947 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.947 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.947 18:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.947 18:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.947 18:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.947 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.947 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.947 18:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.947 18:53:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:05.204 18:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.142 00:22:06.142 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.142 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.142 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.402 { 00:22:06.402 "cntlid": 137, 00:22:06.402 "qid": 0, 00:22:06.402 "state": "enabled", 00:22:06.402 "thread": "nvmf_tgt_poll_group_000", 00:22:06.402 "listen_address": { 00:22:06.402 "trtype": "TCP", 00:22:06.402 "adrfam": "IPv4", 00:22:06.402 "traddr": "10.0.0.2", 00:22:06.402 "trsvcid": "4420" 00:22:06.402 }, 00:22:06.402 "peer_address": { 00:22:06.402 "trtype": "TCP", 00:22:06.402 "adrfam": "IPv4", 00:22:06.402 "traddr": "10.0.0.1", 00:22:06.402 "trsvcid": "36198" 00:22:06.402 }, 00:22:06.402 "auth": { 00:22:06.402 "state": "completed", 00:22:06.402 "digest": "sha512", 00:22:06.402 "dhgroup": "ffdhe8192" 00:22:06.402 } 00:22:06.402 } 00:22:06.402 ]' 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.402 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.660 18:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:22:08.034 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.034 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.034 18:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.034 18:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.034 18:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.034 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.034 18:53:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.034 18:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.034 18:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.972 00:22:08.972 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.972 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.972 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.254 { 00:22:09.254 "cntlid": 139, 00:22:09.254 "qid": 0, 00:22:09.254 "state": "enabled", 00:22:09.254 "thread": "nvmf_tgt_poll_group_000", 00:22:09.254 "listen_address": { 00:22:09.254 "trtype": "TCP", 00:22:09.254 "adrfam": "IPv4", 00:22:09.254 "traddr": "10.0.0.2", 00:22:09.254 "trsvcid": "4420" 00:22:09.254 }, 00:22:09.254 "peer_address": { 00:22:09.254 "trtype": "TCP", 00:22:09.254 "adrfam": "IPv4", 00:22:09.254 "traddr": "10.0.0.1", 00:22:09.254 "trsvcid": "36224" 00:22:09.254 }, 00:22:09.254 "auth": { 00:22:09.254 "state": "completed", 00:22:09.254 "digest": "sha512", 00:22:09.254 "dhgroup": "ffdhe8192" 00:22:09.254 } 00:22:09.254 } 00:22:09.254 ]' 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.254 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.513 18:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTJlMWY0YmZhODVjNDBhODAyZGVmODNhNmIzZDMwNjc4+mIB: --dhchap-ctrl-secret DHHC-1:02:MGRmNGViNjJlNDY1YzAxZGM4MTk3ZGVjODk0Y2VkOWRiZGNkOGIzMDZjNWU1ZmMxFKU/eA==: 00:22:10.892 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.892 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.892 18:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.892 18:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.892 18:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.892 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:22:10.892 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.892 18:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.892 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.831 00:22:11.831 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.831 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.831 18:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.090 { 00:22:12.090 "cntlid": 141, 00:22:12.090 "qid": 0, 00:22:12.090 "state": "enabled", 00:22:12.090 "thread": "nvmf_tgt_poll_group_000", 00:22:12.090 "listen_address": { 00:22:12.090 "trtype": "TCP", 00:22:12.090 "adrfam": "IPv4", 00:22:12.090 "traddr": "10.0.0.2", 00:22:12.090 "trsvcid": "4420" 00:22:12.090 }, 00:22:12.090 "peer_address": { 00:22:12.090 "trtype": "TCP", 00:22:12.090 "adrfam": "IPv4", 00:22:12.090 "traddr": "10.0.0.1", 00:22:12.090 "trsvcid": "36258" 00:22:12.090 }, 00:22:12.090 "auth": { 00:22:12.090 "state": "completed", 00:22:12.090 "digest": "sha512", 00:22:12.090 "dhgroup": "ffdhe8192" 00:22:12.090 } 00:22:12.090 } 00:22:12.090 ]' 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.090 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.348 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.348 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.348 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.605 18:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGEyODliNDNhMzRmMTM3NTk5N2JkNDkwYTA1ODdkYjdhYzI4NjlmZjczNzA1ZjNh9iG7VQ==: --dhchap-ctrl-secret DHHC-1:01:NzM0ZjljNTdhZjE5Y2I0M2ExNWZmN2E5ZThlODViOTc7qrvo: 00:22:13.541 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.541 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.541 18:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.541 18:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.541 18:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.541 
18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.541 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.541 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:13.798 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:13.798 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.798 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.798 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:13.798 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:13.798 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.798 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:13.799 18:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.799 18:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.799 18:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.799 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:13.799 18:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.737 00:22:14.737 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.737 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.737 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.995 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.995 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.995 18:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.995 18:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.995 18:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.995 18:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.995 { 00:22:14.995 "cntlid": 143, 00:22:14.995 "qid": 0, 00:22:14.995 "state": "enabled", 00:22:14.995 "thread": "nvmf_tgt_poll_group_000", 00:22:14.995 "listen_address": { 00:22:14.995 "trtype": "TCP", 00:22:14.995 "adrfam": "IPv4", 00:22:14.995 "traddr": "10.0.0.2", 00:22:14.995 "trsvcid": "4420" 00:22:14.995 }, 00:22:14.995 "peer_address": { 00:22:14.995 "trtype": "TCP", 00:22:14.995 "adrfam": "IPv4", 00:22:14.995 "traddr": "10.0.0.1", 00:22:14.995 "trsvcid": "37072" 00:22:14.995 }, 00:22:14.995 "auth": { 00:22:14.995 "state": "completed", 00:22:14.995 "digest": "sha512", 00:22:14.995 "dhgroup": "ffdhe8192" 00:22:14.995 } 00:22:14.995 } 00:22:14.995 ]' 00:22:14.995 18:54:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.995 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.995 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.995 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.995 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.995 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.995 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.995 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.252 18:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:22:16.185 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.444 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.444 18:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.444 18:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.444 18:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.444 
18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:16.444 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:16.444 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:16.444 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.444 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.444 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.701 18:54:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.701 18:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.637 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.637 { 00:22:17.637 "cntlid": 145, 00:22:17.637 "qid": 0, 00:22:17.637 "state": "enabled", 00:22:17.637 "thread": "nvmf_tgt_poll_group_000", 00:22:17.637 "listen_address": { 00:22:17.637 "trtype": "TCP", 00:22:17.637 "adrfam": 
"IPv4", 00:22:17.637 "traddr": "10.0.0.2", 00:22:17.637 "trsvcid": "4420" 00:22:17.637 }, 00:22:17.637 "peer_address": { 00:22:17.637 "trtype": "TCP", 00:22:17.637 "adrfam": "IPv4", 00:22:17.637 "traddr": "10.0.0.1", 00:22:17.637 "trsvcid": "37098" 00:22:17.637 }, 00:22:17.637 "auth": { 00:22:17.637 "state": "completed", 00:22:17.637 "digest": "sha512", 00:22:17.637 "dhgroup": "ffdhe8192" 00:22:17.637 } 00:22:17.637 } 00:22:17.637 ]' 00:22:17.637 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.894 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.894 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.894 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.894 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.894 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.894 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.894 18:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.152 18:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZDAyNGQ2NTMyNDUyYzZmMDgzN2JjNDljODJlYWVhYzQ0YjZkMDg0ZDM1NzUwNjhk+B0PKA==: --dhchap-ctrl-secret DHHC-1:03:ZTYzOTQ4YmY1OGVhZDg1OGRhY2M0ZWQ1NGQ5ZmNmY2ViOGQxNGQ2MzYwMjU4NWVmMTUzNzI0NmM3Y2RjY2I3OMUyoPQ=: 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.087 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.087 18:54:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:19.087 18:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:20.020 request: 00:22:20.020 { 00:22:20.020 "name": "nvme0", 00:22:20.020 "trtype": "tcp", 00:22:20.020 "traddr": "10.0.0.2", 00:22:20.020 "adrfam": "ipv4", 00:22:20.020 "trsvcid": "4420", 00:22:20.020 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.020 "prchk_reftag": false, 00:22:20.020 "prchk_guard": false, 00:22:20.020 "hdgst": false, 00:22:20.020 "ddgst": false, 00:22:20.020 "dhchap_key": "key2", 00:22:20.020 "method": "bdev_nvme_attach_controller", 00:22:20.020 "req_id": 1 00:22:20.020 } 00:22:20.020 Got JSON-RPC error response 00:22:20.020 response: 00:22:20.020 { 00:22:20.020 "code": -5, 00:22:20.020 "message": "Input/output error" 00:22:20.020 } 00:22:20.020 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:20.020 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:20.020 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:20.020 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
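The trace above exercises a negative test: `NOT hostrpc bdev_nvme_attach_controller ... --dhchap-key key2` is expected to fail (the host was registered with key1 only), and the JSON-RPC error `-5` plus `es=1` confirm it. A minimal, simplified sketch of the `NOT` helper idiom from SPDK's `autotest_common.sh` (the real helper also routes through `valid_exec_arg` and tracks the exit status in `es`, which this sketch omits):

```shell
#!/usr/bin/env bash
# Simplified sketch of the NOT() expected-failure helper: run a command
# that SHOULD fail, and invert its exit status so the test step passes
# only when the command actually fails.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded -> test step fails
  fi
  return 0     # command failed, as expected -> test step passes
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```

Used this way, an authentication attempt with a key the target never registered can be asserted to fail without aborting a script running under `set -e`.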
00:22:20.020 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.020 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.021 
18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.021 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.954 request: 00:22:20.954 { 00:22:20.954 "name": "nvme0", 00:22:20.954 "trtype": "tcp", 00:22:20.954 "traddr": "10.0.0.2", 00:22:20.954 "adrfam": "ipv4", 00:22:20.954 "trsvcid": "4420", 00:22:20.954 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.954 "prchk_reftag": false, 00:22:20.954 "prchk_guard": false, 00:22:20.954 "hdgst": false, 00:22:20.954 "ddgst": false, 00:22:20.954 "dhchap_key": "key1", 00:22:20.954 "dhchap_ctrlr_key": "ckey2", 00:22:20.954 "method": "bdev_nvme_attach_controller", 00:22:20.954 "req_id": 1 00:22:20.954 } 00:22:20.954 Got JSON-RPC error response 00:22:20.954 response: 00:22:20.954 { 00:22:20.954 "code": -5, 00:22:20.954 "message": "Input/output error" 00:22:20.954 } 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.954 18:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.892 request: 00:22:21.892 { 00:22:21.892 "name": "nvme0", 00:22:21.892 "trtype": "tcp", 00:22:21.892 "traddr": "10.0.0.2", 00:22:21.892 "adrfam": "ipv4", 00:22:21.892 "trsvcid": "4420", 00:22:21.892 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.892 "prchk_reftag": false, 00:22:21.892 "prchk_guard": false, 00:22:21.892 "hdgst": false, 00:22:21.892 "ddgst": false, 00:22:21.892 "dhchap_key": "key1", 00:22:21.892 "dhchap_ctrlr_key": "ckey1", 00:22:21.892 "method": "bdev_nvme_attach_controller", 00:22:21.892 "req_id": 1 00:22:21.892 } 00:22:21.892 Got JSON-RPC error response 00:22:21.892 response: 00:22:21.892 { 00:22:21.892 "code": -5, 00:22:21.892 "message": "Input/output error" 00:22:21.892 } 00:22:21.892 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:21.892 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.892 18:54:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.892 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.892 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.892 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.892 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.892 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3605624 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3605624 ']' 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3605624 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3605624 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3605624' 00:22:21.893 killing process with pid 3605624 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3605624 00:22:21.893 18:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3605624 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3628291 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3628291 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3628291 ']' 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.893 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.151 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.151 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3628291 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3628291 ']' 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.152 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.411 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.411 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:22.411 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:22.411 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.411 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:22.668 18:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.604 00:22:23.604 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.604 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.604 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.862 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.862 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.862 18:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.862 18:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.862 18:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.862 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.862 { 00:22:23.862 "cntlid": 1, 00:22:23.862 "qid": 0, 00:22:23.862 "state": "enabled", 00:22:23.862 "thread": "nvmf_tgt_poll_group_000", 00:22:23.862 "listen_address": { 00:22:23.862 "trtype": "TCP", 00:22:23.862 "adrfam": "IPv4", 00:22:23.863 "traddr": "10.0.0.2", 00:22:23.863 "trsvcid": "4420" 00:22:23.863 }, 00:22:23.863 "peer_address": { 00:22:23.863 "trtype": "TCP", 00:22:23.863 "adrfam": "IPv4", 00:22:23.863 "traddr": "10.0.0.1", 00:22:23.863 "trsvcid": 
"36880" 00:22:23.863 }, 00:22:23.863 "auth": { 00:22:23.863 "state": "completed", 00:22:23.863 "digest": "sha512", 00:22:23.863 "dhgroup": "ffdhe8192" 00:22:23.863 } 00:22:23.863 } 00:22:23.863 ]' 00:22:23.863 18:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.863 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.863 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.863 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.863 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.121 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.121 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.121 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.380 18:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM5ODBmNjEwYTdjZTYyNjk0ZDU4NDg1NmNlZjEzZjA0ZjIxYmEzMzE1YzY3M2IyMTc3OWMzN2ZhNDEzMDg2MXc/maI=: 00:22:25.343 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:25.344 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:25.602 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.602 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:25.602 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.602 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:25.602 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.602 18:54:13 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:25.602 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:25.602 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.602 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.860 request: 00:22:25.860 { 00:22:25.860 "name": "nvme0", 00:22:25.860 "trtype": "tcp", 00:22:25.860 "traddr": "10.0.0.2", 00:22:25.860 "adrfam": "ipv4", 00:22:25.860 "trsvcid": "4420", 00:22:25.860 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:25.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:25.860 "prchk_reftag": false, 00:22:25.860 "prchk_guard": false, 00:22:25.860 "hdgst": false, 00:22:25.860 "ddgst": false, 00:22:25.860 "dhchap_key": "key3", 00:22:25.860 "method": "bdev_nvme_attach_controller", 00:22:25.860 "req_id": 1 00:22:25.860 } 00:22:25.860 Got JSON-RPC error response 00:22:25.860 response: 00:22:25.860 { 00:22:25.860 "code": -5, 00:22:25.860 "message": "Input/output error" 00:22:25.860 } 00:22:25.860 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:25.860 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:25.860 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:25.860 18:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:25.860 18:54:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:25.860 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:25.860 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:25.860 18:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:26.118 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.118 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:26.118 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.118 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:26.118 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.118 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:26.118 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.118 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.118 
18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.376 request: 00:22:26.376 { 00:22:26.376 "name": "nvme0", 00:22:26.376 "trtype": "tcp", 00:22:26.376 "traddr": "10.0.0.2", 00:22:26.376 "adrfam": "ipv4", 00:22:26.376 "trsvcid": "4420", 00:22:26.376 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.376 "prchk_reftag": false, 00:22:26.376 "prchk_guard": false, 00:22:26.376 "hdgst": false, 00:22:26.376 "ddgst": false, 00:22:26.376 "dhchap_key": "key3", 00:22:26.376 "method": "bdev_nvme_attach_controller", 00:22:26.376 "req_id": 1 00:22:26.376 } 00:22:26.376 Got JSON-RPC error response 00:22:26.376 response: 00:22:26.376 { 00:22:26.376 "code": -5, 00:22:26.376 "message": "Input/output error" 00:22:26.376 } 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:26.376 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.634 18:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:26.893 request: 00:22:26.893 { 00:22:26.893 "name": "nvme0", 00:22:26.893 "trtype": "tcp", 00:22:26.893 "traddr": "10.0.0.2", 00:22:26.893 "adrfam": "ipv4", 00:22:26.893 "trsvcid": "4420", 00:22:26.893 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.893 "prchk_reftag": false, 00:22:26.893 "prchk_guard": false, 00:22:26.893 "hdgst": false, 00:22:26.893 "ddgst": false, 00:22:26.893 "dhchap_key": "key0", 00:22:26.893 "dhchap_ctrlr_key": "key1", 00:22:26.893 "method": "bdev_nvme_attach_controller", 00:22:26.893 "req_id": 1 00:22:26.893 } 00:22:26.893 Got JSON-RPC error response 00:22:26.893 response: 00:22:26.893 { 
00:22:26.893 "code": -5, 00:22:26.893 "message": "Input/output error" 00:22:26.893 } 00:22:26.893 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:26.893 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:26.893 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:26.893 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:26.893 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:26.893 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:27.462 00:22:27.462 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:27.462 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.462 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:27.462 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.462 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.462 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3605708 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3605708 ']' 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3605708 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3605708 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3605708' 00:22:28.032 killing process with pid 3605708 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3605708 00:22:28.032 18:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3605708 00:22:28.292 18:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.293 rmmod nvme_tcp 00:22:28.293 rmmod nvme_fabrics 
00:22:28.293 rmmod nvme_keyring 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3628291 ']' 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3628291 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3628291 ']' 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3628291 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3628291 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3628291' 00:22:28.293 killing process with pid 3628291 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3628291 00:22:28.293 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3628291 00:22:28.552 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.552 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.552 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.552 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.552 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.552 18:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.552 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.552 18:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.092 18:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:31.092 18:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.MVf /tmp/spdk.key-sha256.aLc /tmp/spdk.key-sha384.Dzu /tmp/spdk.key-sha512.rRs /tmp/spdk.key-sha512.SPw /tmp/spdk.key-sha384.rON /tmp/spdk.key-sha256.shF '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:31.092 00:22:31.092 real 3m9.937s 00:22:31.092 user 7m22.656s 00:22:31.092 sys 0m24.930s 00:22:31.092 18:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:31.092 18:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.092 ************************************ 00:22:31.092 END TEST nvmf_auth_target 00:22:31.092 ************************************ 00:22:31.092 18:54:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:31.092 18:54:18 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:31.092 18:54:18 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:31.092 18:54:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:31.092 18:54:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.092 18:54:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.092 
************************************ 00:22:31.092 START TEST nvmf_bdevio_no_huge 00:22:31.092 ************************************ 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:31.092 * Looking for test storage... 00:22:31.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 
00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:31.092 18:54:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:31.092 18:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:32.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:32.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:32.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:32.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:32.998 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:32.999 18:54:20 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:32.999 18:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.999 
18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:32.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:22:32.999 00:22:32.999 --- 10.0.0.2 ping statistics --- 00:22:32.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.999 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:22:32.999 00:22:32.999 --- 10.0.0.1 ping statistics --- 00:22:32.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.999 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.999 18:54:21 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3631040 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3631040 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3631040 ']' 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.999 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.999 [2024-07-14 18:54:21.108706] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:22:32.999 [2024-07-14 18:54:21.108807] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:32.999 [2024-07-14 18:54:21.180406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.258 [2024-07-14 18:54:21.270696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.258 [2024-07-14 18:54:21.270761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.258 [2024-07-14 18:54:21.270786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.258 [2024-07-14 18:54:21.270799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.258 [2024-07-14 18:54:21.270815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:33.258 [2024-07-14 18:54:21.270909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:33.258 [2024-07-14 18:54:21.270962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:33.258 [2024-07-14 18:54:21.271019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:33.258 [2024-07-14 18:54:21.271372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.258 [2024-07-14 18:54:21.387691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.258 Malloc0 00:22:33.258 18:54:21 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.258 [2024-07-14 18:54:21.425814] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:33.258 18:54:21 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:33.258 { 00:22:33.258 "params": { 00:22:33.258 "name": "Nvme$subsystem", 00:22:33.258 "trtype": "$TEST_TRANSPORT", 00:22:33.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.258 "adrfam": "ipv4", 00:22:33.258 "trsvcid": "$NVMF_PORT", 00:22:33.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.258 "hdgst": ${hdgst:-false}, 00:22:33.258 "ddgst": ${ddgst:-false} 00:22:33.258 }, 00:22:33.258 "method": "bdev_nvme_attach_controller" 00:22:33.258 } 00:22:33.258 EOF 00:22:33.258 )") 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:33.258 18:54:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:33.258 "params": { 00:22:33.258 "name": "Nvme1", 00:22:33.258 "trtype": "tcp", 00:22:33.258 "traddr": "10.0.0.2", 00:22:33.258 "adrfam": "ipv4", 00:22:33.258 "trsvcid": "4420", 00:22:33.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.258 "hdgst": false, 00:22:33.258 "ddgst": false 00:22:33.258 }, 00:22:33.258 "method": "bdev_nvme_attach_controller" 00:22:33.258 }' 00:22:33.258 [2024-07-14 18:54:21.471956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:22:33.258 [2024-07-14 18:54:21.472042] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3631116 ] 00:22:33.516 [2024-07-14 18:54:21.535576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:33.516 [2024-07-14 18:54:21.618138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.516 [2024-07-14 18:54:21.618188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.516 [2024-07-14 18:54:21.618192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.775 I/O targets: 00:22:33.775 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:33.775 00:22:33.775 00:22:33.775 CUnit - A unit testing framework for C - Version 2.1-3 00:22:33.775 http://cunit.sourceforge.net/ 00:22:33.775 00:22:33.775 00:22:33.775 Suite: bdevio tests on: Nvme1n1 00:22:33.775 Test: blockdev write read block ...passed 00:22:33.775 Test: blockdev write zeroes read block ...passed 00:22:33.775 Test: blockdev write zeroes read no split ...passed 00:22:34.033 Test: blockdev write zeroes read split ...passed 00:22:34.033 Test: blockdev write zeroes read split partial ...passed 00:22:34.033 Test: blockdev reset ...[2024-07-14 18:54:22.054240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:34.033 [2024-07-14 18:54:22.054347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2243b00 (9): Bad file descriptor 00:22:34.033 [2024-07-14 18:54:22.070108] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:34.033 passed 00:22:34.033 Test: blockdev write read 8 blocks ...passed 00:22:34.033 Test: blockdev write read size > 128k ...passed 00:22:34.033 Test: blockdev write read invalid size ...passed 00:22:34.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:34.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:34.033 Test: blockdev write read max offset ...passed 00:22:34.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:34.033 Test: blockdev writev readv 8 blocks ...passed 00:22:34.292 Test: blockdev writev readv 30 x 1block ...passed 00:22:34.292 Test: blockdev writev readv block ...passed 00:22:34.292 Test: blockdev writev readv size > 128k ...passed 00:22:34.292 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:34.292 Test: blockdev comparev and writev ...[2024-07-14 18:54:22.324229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.292 [2024-07-14 18:54:22.324265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.324289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.292 [2024-07-14 18:54:22.324307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.324654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.292 [2024-07-14 18:54:22.324679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.324706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.292 [2024-07-14 18:54:22.324724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.325067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.292 [2024-07-14 18:54:22.325091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.325112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.292 [2024-07-14 18:54:22.325129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.325473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.292 [2024-07-14 18:54:22.325510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.325532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.292 [2024-07-14 18:54:22.325548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:34.292 passed 00:22:34.292 Test: blockdev nvme passthru rw ...passed 00:22:34.292 Test: blockdev nvme passthru vendor specific ...[2024-07-14 18:54:22.408164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.292 [2024-07-14 18:54:22.408191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.408351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.292 [2024-07-14 18:54:22.408376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.408532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.292 [2024-07-14 18:54:22.408555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:34.292 [2024-07-14 18:54:22.408709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.292 [2024-07-14 18:54:22.408733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:34.292 passed 00:22:34.292 Test: blockdev nvme admin passthru ...passed 00:22:34.292 Test: blockdev copy ...passed 00:22:34.292 00:22:34.292 Run Summary: Type Total Ran Passed Failed Inactive 00:22:34.292 suites 1 1 n/a 0 0 00:22:34.292 tests 23 23 23 0 0 00:22:34.292 asserts 152 152 152 0 n/a 00:22:34.292 00:22:34.292 Elapsed time = 1.147 seconds 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:34.858 rmmod nvme_tcp 00:22:34.858 rmmod nvme_fabrics 00:22:34.858 rmmod nvme_keyring 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3631040 ']' 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3631040 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3631040 ']' 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3631040 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3631040 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3631040' 00:22:34.858 killing process with pid 3631040 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3631040 00:22:34.858 18:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3631040 00:22:35.119 18:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.119 18:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.119 18:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.119 18:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.119 18:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.119 18:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.119 18:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.119 18:54:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.647 18:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.647 00:22:37.647 real 0m6.500s 00:22:37.647 user 0m10.630s 00:22:37.647 sys 0m2.572s 00:22:37.647 18:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:37.647 18:54:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.647 ************************************ 00:22:37.647 END TEST nvmf_bdevio_no_huge 00:22:37.647 ************************************ 00:22:37.647 18:54:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:37.647 18:54:25 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.647 18:54:25 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:37.647 18:54:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.647 18:54:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.647 ************************************ 00:22:37.647 START TEST nvmf_tls 00:22:37.647 ************************************ 00:22:37.647 18:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.647 * Looking for test storage... 00:22:37.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.647 18:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.647 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.648 18:54:25 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.648 18:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.567 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.567 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.567 18:54:27 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.567 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.567 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:39.567 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:39.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:22:39.568 00:22:39.568 --- 10.0.0.2 ping statistics --- 00:22:39.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.568 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:22:39.568 00:22:39.568 --- 10.0.0.1 ping statistics --- 00:22:39.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.568 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3633246 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3633246 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3633246 ']' 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.568 [2024-07-14 18:54:27.572235] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:39.568 [2024-07-14 18:54:27.572325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.568 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.568 [2024-07-14 18:54:27.643969] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.568 [2024-07-14 18:54:27.734494] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.568 [2024-07-14 18:54:27.734554] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.568 [2024-07-14 18:54:27.734582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.568 [2024-07-14 18:54:27.734596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.568 [2024-07-14 18:54:27.734608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:39.568 [2024-07-14 18:54:27.734648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:39.568 18:54:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.832 18:54:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.832 18:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:39.832 18:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:39.832 true 00:22:39.832 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:39.833 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:40.089 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:40.089 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:40.089 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:40.652 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.652 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:40.652 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:40.652 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:40.652 18:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:40.909 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.909 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:41.472 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:41.472 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:41.472 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.472 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:41.472 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:41.472 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:41.728 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:41.728 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.728 18:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:41.984 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:41.984 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:41.984 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:42.242 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.242 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.aZ9W0PF70y 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:42.806 
18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.pB3r6MXD8H 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.aZ9W0PF70y 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pB3r6MXD8H 00:22:42.806 18:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:43.064 18:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:43.322 18:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.aZ9W0PF70y 00:22:43.322 18:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aZ9W0PF70y 00:22:43.322 18:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:43.579 [2024-07-14 18:54:31.727546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.579 18:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:43.836 18:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.094 [2024-07-14 18:54:32.277016] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.094 [2024-07-14 18:54:32.277286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:44.094 18:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.353 malloc0 00:22:44.353 18:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:44.641 18:54:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aZ9W0PF70y 00:22:44.899 [2024-07-14 18:54:33.058855] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:44.899 18:54:33 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aZ9W0PF70y 00:22:44.899 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.092 Initializing NVMe Controllers 00:22:57.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:57.092 Initialization complete. Launching workers. 
00:22:57.092 ======================================================== 00:22:57.092 Latency(us) 00:22:57.092 Device Information : IOPS MiB/s Average min max 00:22:57.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7547.28 29.48 8482.12 1248.26 9284.58 00:22:57.092 ======================================================== 00:22:57.092 Total : 7547.28 29.48 8482.12 1248.26 9284.58 00:22:57.092 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aZ9W0PF70y 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aZ9W0PF70y' 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3635156 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3635156 /var/tmp/bdevperf.sock 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3635156 ']' 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.092 [2024-07-14 18:54:43.230778] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:57.092 [2024-07-14 18:54:43.230871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635156 ] 00:22:57.092 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.092 [2024-07-14 18:54:43.289690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.092 [2024-07-14 18:54:43.375526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aZ9W0PF70y 00:22:57.092 [2024-07-14 18:54:43.733764] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.092 [2024-07-14 18:54:43.733917] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:57.092 TLSTESTn1 00:22:57.092 18:54:43 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:57.092 Running I/O for 10 seconds... 00:23:07.058 00:23:07.058 Latency(us) 00:23:07.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.058 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:07.058 Verification LBA range: start 0x0 length 0x2000 00:23:07.058 TLSTESTn1 : 10.02 3084.57 12.05 0.00 0.00 41424.42 8398.32 61361.11 00:23:07.058 =================================================================================================================== 00:23:07.058 Total : 3084.57 12.05 0.00 0.00 41424.42 8398.32 61361.11 00:23:07.058 0 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3635156 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3635156 ']' 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3635156 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3635156 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3635156' 00:23:07.058 killing process with pid 3635156 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3635156 00:23:07.058 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.058 00:23:07.058 Latency(us) 
00:23:07.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.058 =================================================================================================================== 00:23:07.058 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.058 [2024-07-14 18:54:54.031713] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3635156 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pB3r6MXD8H 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pB3r6MXD8H 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pB3r6MXD8H 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pB3r6MXD8H' 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3636350 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3636350 /var/tmp/bdevperf.sock 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3636350 ']' 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.058 [2024-07-14 18:54:54.277685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:07.058 [2024-07-14 18:54:54.277773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636350 ] 00:23:07.058 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.058 [2024-07-14 18:54:54.339458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.058 [2024-07-14 18:54:54.424545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.058 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pB3r6MXD8H 00:23:07.059 [2024-07-14 18:54:54.752315] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.059 [2024-07-14 18:54:54.752437] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.059 [2024-07-14 18:54:54.757779] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.059 [2024-07-14 18:54:54.758253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aabb0 (107): Transport endpoint is not connected 00:23:07.059 [2024-07-14 18:54:54.759242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aabb0 (9): Bad file descriptor 00:23:07.059 [2024-07-14 
18:54:54.760240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:07.059 [2024-07-14 18:54:54.760261] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.059 [2024-07-14 18:54:54.760287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:07.059 request: 00:23:07.059 { 00:23:07.059 "name": "TLSTEST", 00:23:07.059 "trtype": "tcp", 00:23:07.059 "traddr": "10.0.0.2", 00:23:07.059 "adrfam": "ipv4", 00:23:07.059 "trsvcid": "4420", 00:23:07.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.059 "prchk_reftag": false, 00:23:07.059 "prchk_guard": false, 00:23:07.059 "hdgst": false, 00:23:07.059 "ddgst": false, 00:23:07.059 "psk": "/tmp/tmp.pB3r6MXD8H", 00:23:07.059 "method": "bdev_nvme_attach_controller", 00:23:07.059 "req_id": 1 00:23:07.059 } 00:23:07.059 Got JSON-RPC error response 00:23:07.059 response: 00:23:07.059 { 00:23:07.059 "code": -5, 00:23:07.059 "message": "Input/output error" 00:23:07.059 } 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3636350 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3636350 ']' 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3636350 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3636350 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
3636350' 00:23:07.059 killing process with pid 3636350 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3636350 00:23:07.059 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.059 00:23:07.059 Latency(us) 00:23:07.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.059 =================================================================================================================== 00:23:07.059 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.059 [2024-07-14 18:54:54.803853] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:07.059 18:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3636350 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aZ9W0PF70y 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aZ9W0PF70y 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aZ9W0PF70y 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aZ9W0PF70y' 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3636481 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3636481 /var/tmp/bdevperf.sock 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3636481 ']' 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.059 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.059 [2024-07-14 18:54:55.064226] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:07.059 [2024-07-14 18:54:55.064317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636481 ] 00:23:07.059 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.059 [2024-07-14 18:54:55.123983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.059 [2024-07-14 18:54:55.212999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.317 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.317 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:07.317 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aZ9W0PF70y 00:23:07.575 [2024-07-14 18:54:55.543941] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.575 [2024-07-14 18:54:55.544055] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.575 [2024-07-14 18:54:55.549426] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.575 [2024-07-14 18:54:55.549468] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.575 [2024-07-14 18:54:55.549536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:07.575 [2024-07-14 18:54:55.550004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf99bb0 (107): Transport endpoint is not connected 00:23:07.575 [2024-07-14 18:54:55.550991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf99bb0 (9): Bad file descriptor 00:23:07.575 [2024-07-14 18:54:55.551990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:07.575 [2024-07-14 18:54:55.552010] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.575 [2024-07-14 18:54:55.552027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:07.575 request: 00:23:07.575 { 00:23:07.575 "name": "TLSTEST", 00:23:07.575 "trtype": "tcp", 00:23:07.575 "traddr": "10.0.0.2", 00:23:07.575 "adrfam": "ipv4", 00:23:07.575 "trsvcid": "4420", 00:23:07.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.575 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:07.575 "prchk_reftag": false, 00:23:07.575 "prchk_guard": false, 00:23:07.575 "hdgst": false, 00:23:07.575 "ddgst": false, 00:23:07.575 "psk": "/tmp/tmp.aZ9W0PF70y", 00:23:07.575 "method": "bdev_nvme_attach_controller", 00:23:07.575 "req_id": 1 00:23:07.575 } 00:23:07.575 Got JSON-RPC error response 00:23:07.575 response: 00:23:07.575 { 00:23:07.575 "code": -5, 00:23:07.575 "message": "Input/output error" 00:23:07.575 } 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3636481 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3636481 ']' 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3636481 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3636481 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3636481' 00:23:07.575 killing process with pid 3636481 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3636481 00:23:07.575 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.575 00:23:07.575 Latency(us) 00:23:07.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.575 
=================================================================================================================== 00:23:07.575 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.575 [2024-07-14 18:54:55.601999] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:07.575 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3636481 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aZ9W0PF70y 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aZ9W0PF70y 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aZ9W0PF70y 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aZ9W0PF70y' 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3636621 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3636621 /var/tmp/bdevperf.sock 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3636621 ']' 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.833 18:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.833 [2024-07-14 18:54:55.856872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:07.833 [2024-07-14 18:54:55.856973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636621 ] 00:23:07.833 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.833 [2024-07-14 18:54:55.919665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.833 [2024-07-14 18:54:56.005174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.091 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.091 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:08.091 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aZ9W0PF70y 00:23:08.349 [2024-07-14 18:54:56.376707] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.349 [2024-07-14 18:54:56.376808] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:08.349 [2024-07-14 18:54:56.381988] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:08.349 [2024-07-14 18:54:56.382024] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:08.349 [2024-07-14 18:54:56.382080] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:08.349 [2024-07-14 18:54:56.382589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2230bb0 (107): Transport endpoint is not connected 00:23:08.349 [2024-07-14 18:54:56.383578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2230bb0 (9): Bad file descriptor 00:23:08.349 [2024-07-14 18:54:56.384580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:08.350 [2024-07-14 18:54:56.384600] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:08.350 [2024-07-14 18:54:56.384616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:08.350 request: 00:23:08.350 { 00:23:08.350 "name": "TLSTEST", 00:23:08.350 "trtype": "tcp", 00:23:08.350 "traddr": "10.0.0.2", 00:23:08.350 "adrfam": "ipv4", 00:23:08.350 "trsvcid": "4420", 00:23:08.350 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:08.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.350 "prchk_reftag": false, 00:23:08.350 "prchk_guard": false, 00:23:08.350 "hdgst": false, 00:23:08.350 "ddgst": false, 00:23:08.350 "psk": "/tmp/tmp.aZ9W0PF70y", 00:23:08.350 "method": "bdev_nvme_attach_controller", 00:23:08.350 "req_id": 1 00:23:08.350 } 00:23:08.350 Got JSON-RPC error response 00:23:08.350 response: 00:23:08.350 { 00:23:08.350 "code": -5, 00:23:08.350 "message": "Input/output error" 00:23:08.350 } 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3636621 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3636621 ']' 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3636621 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3636621 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3636621' 00:23:08.350 killing process with pid 3636621 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3636621 00:23:08.350 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.350 00:23:08.350 Latency(us) 00:23:08.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.350 =================================================================================================================== 00:23:08.350 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.350 [2024-07-14 18:54:56.433200] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:08.350 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3636621 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3636645 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3636645 /var/tmp/bdevperf.sock 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3636645 ']' 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.608 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.608 [2024-07-14 18:54:56.700993] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:08.608 [2024-07-14 18:54:56.701072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636645 ] 00:23:08.608 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.608 [2024-07-14 18:54:56.763895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.867 [2024-07-14 18:54:56.849238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.867 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.867 18:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:08.867 18:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:09.125 [2024-07-14 18:54:57.231673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:09.125 [2024-07-14 18:54:57.233281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64e160 (9): Bad file descriptor 00:23:09.125 [2024-07-14 18:54:57.234275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.125 [2024-07-14 18:54:57.234295] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:09.125 [2024-07-14 18:54:57.234311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.125 request: 00:23:09.125 { 00:23:09.125 "name": "TLSTEST", 00:23:09.125 "trtype": "tcp", 00:23:09.125 "traddr": "10.0.0.2", 00:23:09.125 "adrfam": "ipv4", 00:23:09.125 "trsvcid": "4420", 00:23:09.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.125 "prchk_reftag": false, 00:23:09.125 "prchk_guard": false, 00:23:09.125 "hdgst": false, 00:23:09.125 "ddgst": false, 00:23:09.125 "method": "bdev_nvme_attach_controller", 00:23:09.125 "req_id": 1 00:23:09.125 } 00:23:09.125 Got JSON-RPC error response 00:23:09.125 response: 00:23:09.125 { 00:23:09.125 "code": -5, 00:23:09.125 "message": "Input/output error" 00:23:09.125 } 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3636645 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3636645 ']' 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3636645 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3636645 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3636645' 00:23:09.125 killing process with pid 3636645 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 3636645 00:23:09.125 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.125 00:23:09.125 Latency(us) 00:23:09.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.125 =================================================================================================================== 00:23:09.125 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.125 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3636645 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3633246 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3633246 ']' 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3633246 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3633246 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3633246' 00:23:09.383 killing process with pid 3633246 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
3633246 00:23:09.383 [2024-07-14 18:54:57.524303] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:09.383 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3633246 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ncabER4Zno 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ncabER4Zno 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls 
-- nvmf/common.sh@481 -- # nvmfpid=3636802 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3636802 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3636802 ']' 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.641 18:54:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.899 [2024-07-14 18:54:57.877429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:09.899 [2024-07-14 18:54:57.877516] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.899 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.899 [2024-07-14 18:54:57.943790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.899 [2024-07-14 18:54:58.027530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.899 [2024-07-14 18:54:58.027582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:09.899 [2024-07-14 18:54:58.027601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.899 [2024-07-14 18:54:58.027617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.899 [2024-07-14 18:54:58.027632] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.899 [2024-07-14 18:54:58.027665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.156 18:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.156 18:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:10.156 18:54:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.156 18:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:10.156 18:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.157 18:54:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.157 18:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ncabER4Zno 00:23:10.157 18:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ncabER4Zno 00:23:10.157 18:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.415 [2024-07-14 18:54:58.441251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.415 18:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.672 18:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:23:10.930 [2024-07-14 18:54:58.958628] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.930 [2024-07-14 18:54:58.958953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.930 18:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.188 malloc0 00:23:11.188 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.445 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ncabER4Zno 00:23:11.704 [2024-07-14 18:54:59.756755] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ncabER4Zno 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ncabER4Zno' 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3637076 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.704 18:54:59 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3637076 /var/tmp/bdevperf.sock 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3637076 ']' 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.704 18:54:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.704 [2024-07-14 18:54:59.821723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:11.704 [2024-07-14 18:54:59.821799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637076 ] 00:23:11.704 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.704 [2024-07-14 18:54:59.882445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.961 [2024-07-14 18:54:59.968800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.961 18:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.961 18:55:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:11.961 18:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ncabER4Zno 00:23:12.218 [2024-07-14 18:55:00.320652] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.218 [2024-07-14 18:55:00.320758] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:12.218 TLSTESTn1 00:23:12.218 18:55:00 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:12.475 Running I/O for 10 seconds... 
00:23:22.436 00:23:22.436 Latency(us) 00:23:22.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.436 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:22.436 Verification LBA range: start 0x0 length 0x2000 00:23:22.436 TLSTESTn1 : 10.02 3532.58 13.80 0.00 0.00 36170.77 6893.42 47962.64 00:23:22.436 =================================================================================================================== 00:23:22.436 Total : 3532.58 13.80 0.00 0.00 36170.77 6893.42 47962.64 00:23:22.436 0 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3637076 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3637076 ']' 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3637076 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3637076 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3637076' 00:23:22.436 killing process with pid 3637076 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3637076 00:23:22.436 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.436 00:23:22.436 Latency(us) 00:23:22.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.436 
=================================================================================================================== 00:23:22.436 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.436 [2024-07-14 18:55:10.618535] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:22.436 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3637076 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ncabER4Zno 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ncabER4Zno 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ncabER4Zno 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ncabER4Zno 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ncabER4Zno' 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3638389 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3638389 /var/tmp/bdevperf.sock 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3638389 ']' 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.694 18:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.694 [2024-07-14 18:55:10.895413] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:22.694 [2024-07-14 18:55:10.895489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638389 ] 00:23:22.952 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.952 [2024-07-14 18:55:10.954295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.952 [2024-07-14 18:55:11.039245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.952 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.952 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:22.952 18:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ncabER4Zno 00:23:23.210 [2024-07-14 18:55:11.400957] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.210 [2024-07-14 18:55:11.401029] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:23.210 [2024-07-14 18:55:11.401043] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ncabER4Zno 00:23:23.210 request: 00:23:23.210 { 00:23:23.210 "name": "TLSTEST", 00:23:23.210 "trtype": "tcp", 00:23:23.210 "traddr": "10.0.0.2", 00:23:23.210 "adrfam": "ipv4", 00:23:23.210 "trsvcid": "4420", 00:23:23.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.210 "prchk_reftag": false, 00:23:23.210 "prchk_guard": false, 00:23:23.210 "hdgst": false, 00:23:23.210 "ddgst": false, 00:23:23.210 "psk": "/tmp/tmp.ncabER4Zno", 00:23:23.210 "method": "bdev_nvme_attach_controller", 
00:23:23.210 "req_id": 1 00:23:23.210 } 00:23:23.210 Got JSON-RPC error response 00:23:23.210 response: 00:23:23.210 { 00:23:23.210 "code": -1, 00:23:23.210 "message": "Operation not permitted" 00:23:23.210 } 00:23:23.210 18:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3638389 00:23:23.210 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3638389 ']' 00:23:23.210 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3638389 00:23:23.210 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:23.210 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.210 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3638389 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3638389' 00:23:23.497 killing process with pid 3638389 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3638389 00:23:23.497 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.497 00:23:23.497 Latency(us) 00:23:23.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.497 =================================================================================================================== 00:23:23.497 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3638389 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:23.497 
18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3636802 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3636802 ']' 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3636802 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3636802 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3636802' 00:23:23.497 killing process with pid 3636802 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3636802 00:23:23.497 [2024-07-14 18:55:11.690106] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:23.497 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3636802 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3638532 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3638532 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3638532 ']' 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.756 18:55:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.014 [2024-07-14 18:55:11.988020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:24.014 [2024-07-14 18:55:11.988099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.014 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.014 [2024-07-14 18:55:12.057915] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.014 [2024-07-14 18:55:12.145325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.014 [2024-07-14 18:55:12.145392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:24.014 [2024-07-14 18:55:12.145417] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.014 [2024-07-14 18:55:12.145438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.014 [2024-07-14 18:55:12.145457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.014 [2024-07-14 18:55:12.145501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ncabER4Zno 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ncabER4Zno 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ncabER4Zno 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.ncabER4Zno 00:23:24.272 18:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.530 [2024-07-14 18:55:12.547169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.530 18:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:24.788 18:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.045 [2024-07-14 18:55:13.040520] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.045 [2024-07-14 18:55:13.040809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.045 18:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.302 malloc0 00:23:25.302 18:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:25.560 18:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ncabER4Zno 00:23:25.560 [2024-07-14 18:55:13.773774] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:25.560 [2024-07-14 18:55:13.773813] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:25.560 [2024-07-14 18:55:13.773860] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:25.560 
request: 00:23:25.560 { 00:23:25.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.560 "host": "nqn.2016-06.io.spdk:host1", 00:23:25.560 "psk": "/tmp/tmp.ncabER4Zno", 00:23:25.560 "method": "nvmf_subsystem_add_host", 00:23:25.560 "req_id": 1 00:23:25.560 } 00:23:25.560 Got JSON-RPC error response 00:23:25.560 response: 00:23:25.560 { 00:23:25.560 "code": -32603, 00:23:25.560 "message": "Internal error" 00:23:25.560 } 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3638532 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3638532 ']' 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3638532 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3638532 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3638532' 00:23:25.818 killing process with pid 3638532 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3638532 00:23:25.818 18:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3638532 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ncabER4Zno 
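The `chmod 0600` above is the pivot of this test sequence: with the key at 0666, both the earlier `bdev_nvme_attach_controller` call ("Incorrect permissions for PSK file") and the `nvmf_subsystem_add_host` call above failed, and restoring 0600 lets the subsequent attach succeed. A minimal stand-alone sketch of that permission gate, my own emulation and not SPDK's actual check, in shell:

```shell
psk_perms_ok() {
  # A PSK file must not be readable by group or other (0600 passes, 0666 fails)
  m=$(stat -c '%a' "$1")
  [ "${m#?}" = "00" ]
}

psk=$(mktemp)                       # mktemp creates the file mode 0600
chmod 0666 "$psk"
psk_perms_ok "$psk" || echo "PSK rejected: permissions too open"
chmod 0600 "$psk"
psk_perms_ok "$psk" && echo "PSK accepted"
mode=$(stat -c '%a' "$psk")
rm -f "$psk"
```

This mirrors why the test run first does `chmod 0666` to provoke the "Operation not permitted" / "Internal error" responses, then `chmod 0600` before re-running the positive path.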
00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3638825 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3638825 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3638825 ']' 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.076 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.076 [2024-07-14 18:55:14.134001] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:26.076 [2024-07-14 18:55:14.134088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.076 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.076 [2024-07-14 18:55:14.199709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.076 [2024-07-14 18:55:14.288084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.076 [2024-07-14 18:55:14.288147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.076 [2024-07-14 18:55:14.288172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.076 [2024-07-14 18:55:14.288192] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.076 [2024-07-14 18:55:14.288209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:26.076 [2024-07-14 18:55:14.288249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.334 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.334 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:26.334 18:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.334 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.334 18:55:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.334 18:55:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.334 18:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ncabER4Zno 00:23:26.334 18:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ncabER4Zno 00:23:26.334 18:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:26.592 [2024-07-14 18:55:14.652349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.593 18:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:26.851 18:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:27.109 [2024-07-14 18:55:15.157707] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.109 [2024-07-14 18:55:15.158011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.109 18:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:23:27.366 malloc0 00:23:27.366 18:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:27.623 18:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ncabER4Zno 00:23:27.881 [2024-07-14 18:55:15.887411] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3638991 00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3638991 /var/tmp/bdevperf.sock 00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3638991 ']' 00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
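The `waitforlisten` step above blocks until bdevperf opens /var/tmp/bdevperf.sock before any RPCs are issued against it. A rough self-contained sketch of the same bounded-polling pattern (a plain file stands in for the UNIX domain socket, and all names here are invented for illustration):

```shell
# Poll with a bounded retry count until the "server" side creates its
# socket path, then proceed or give up.
target=$(mktemp -u)                # path that does not exist yet
( sleep 0.3; : > "$target" ) &     # server comes up after a short delay
max_retries=100
ready=0
i=0
while [ "$i" -lt "$max_retries" ]; do
  if [ -e "$target" ]; then ready=1; break; fi
  sleep 0.1
  i=$((i + 1))
done
wait                               # reap the background "server"
[ "$ready" -eq 1 ] && echo "listener ready after $i retries"
rm -f "$target"
```

In the real harness the check is against the RPC socket actually accepting connections (hence "start up and listen"), not mere path existence; the bounded `max_retries` loop is the shared idea.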
00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.881 18:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.881 [2024-07-14 18:55:15.946624] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:27.881 [2024-07-14 18:55:15.946695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638991 ] 00:23:27.881 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.881 [2024-07-14 18:55:16.022028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.139 [2024-07-14 18:55:16.123703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.139 18:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.139 18:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:28.139 18:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ncabER4Zno 00:23:28.396 [2024-07-14 18:55:16.505465] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.396 [2024-07-14 18:55:16.505584] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.396 TLSTESTn1 00:23:28.396 18:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:28.963 18:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:28.963 "subsystems": [ 00:23:28.963 { 00:23:28.963 
"subsystem": "keyring", 00:23:28.963 "config": [] 00:23:28.963 }, 00:23:28.963 { 00:23:28.963 "subsystem": "iobuf", 00:23:28.963 "config": [ 00:23:28.963 { 00:23:28.963 "method": "iobuf_set_options", 00:23:28.963 "params": { 00:23:28.963 "small_pool_count": 8192, 00:23:28.963 "large_pool_count": 1024, 00:23:28.963 "small_bufsize": 8192, 00:23:28.963 "large_bufsize": 135168 00:23:28.963 } 00:23:28.963 } 00:23:28.963 ] 00:23:28.963 }, 00:23:28.963 { 00:23:28.963 "subsystem": "sock", 00:23:28.963 "config": [ 00:23:28.963 { 00:23:28.963 "method": "sock_set_default_impl", 00:23:28.963 "params": { 00:23:28.963 "impl_name": "posix" 00:23:28.963 } 00:23:28.963 }, 00:23:28.963 { 00:23:28.963 "method": "sock_impl_set_options", 00:23:28.963 "params": { 00:23:28.963 "impl_name": "ssl", 00:23:28.963 "recv_buf_size": 4096, 00:23:28.963 "send_buf_size": 4096, 00:23:28.963 "enable_recv_pipe": true, 00:23:28.963 "enable_quickack": false, 00:23:28.963 "enable_placement_id": 0, 00:23:28.963 "enable_zerocopy_send_server": true, 00:23:28.963 "enable_zerocopy_send_client": false, 00:23:28.963 "zerocopy_threshold": 0, 00:23:28.963 "tls_version": 0, 00:23:28.963 "enable_ktls": false 00:23:28.963 } 00:23:28.963 }, 00:23:28.963 { 00:23:28.963 "method": "sock_impl_set_options", 00:23:28.963 "params": { 00:23:28.963 "impl_name": "posix", 00:23:28.963 "recv_buf_size": 2097152, 00:23:28.963 "send_buf_size": 2097152, 00:23:28.963 "enable_recv_pipe": true, 00:23:28.963 "enable_quickack": false, 00:23:28.963 "enable_placement_id": 0, 00:23:28.963 "enable_zerocopy_send_server": true, 00:23:28.963 "enable_zerocopy_send_client": false, 00:23:28.963 "zerocopy_threshold": 0, 00:23:28.963 "tls_version": 0, 00:23:28.963 "enable_ktls": false 00:23:28.963 } 00:23:28.963 } 00:23:28.963 ] 00:23:28.963 }, 00:23:28.963 { 00:23:28.963 "subsystem": "vmd", 00:23:28.963 "config": [] 00:23:28.963 }, 00:23:28.963 { 00:23:28.963 "subsystem": "accel", 00:23:28.963 "config": [ 00:23:28.963 { 00:23:28.963 "method": 
"accel_set_options", 00:23:28.963 "params": { 00:23:28.963 "small_cache_size": 128, 00:23:28.963 "large_cache_size": 16, 00:23:28.963 "task_count": 2048, 00:23:28.963 "sequence_count": 2048, 00:23:28.963 "buf_count": 2048 00:23:28.963 } 00:23:28.963 } 00:23:28.963 ] 00:23:28.963 }, 00:23:28.963 { 00:23:28.963 "subsystem": "bdev", 00:23:28.963 "config": [ 00:23:28.963 { 00:23:28.963 "method": "bdev_set_options", 00:23:28.963 "params": { 00:23:28.964 "bdev_io_pool_size": 65535, 00:23:28.964 "bdev_io_cache_size": 256, 00:23:28.964 "bdev_auto_examine": true, 00:23:28.964 "iobuf_small_cache_size": 128, 00:23:28.964 "iobuf_large_cache_size": 16 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "bdev_raid_set_options", 00:23:28.964 "params": { 00:23:28.964 "process_window_size_kb": 1024 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "bdev_iscsi_set_options", 00:23:28.964 "params": { 00:23:28.964 "timeout_sec": 30 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "bdev_nvme_set_options", 00:23:28.964 "params": { 00:23:28.964 "action_on_timeout": "none", 00:23:28.964 "timeout_us": 0, 00:23:28.964 "timeout_admin_us": 0, 00:23:28.964 "keep_alive_timeout_ms": 10000, 00:23:28.964 "arbitration_burst": 0, 00:23:28.964 "low_priority_weight": 0, 00:23:28.964 "medium_priority_weight": 0, 00:23:28.964 "high_priority_weight": 0, 00:23:28.964 "nvme_adminq_poll_period_us": 10000, 00:23:28.964 "nvme_ioq_poll_period_us": 0, 00:23:28.964 "io_queue_requests": 0, 00:23:28.964 "delay_cmd_submit": true, 00:23:28.964 "transport_retry_count": 4, 00:23:28.964 "bdev_retry_count": 3, 00:23:28.964 "transport_ack_timeout": 0, 00:23:28.964 "ctrlr_loss_timeout_sec": 0, 00:23:28.964 "reconnect_delay_sec": 0, 00:23:28.964 "fast_io_fail_timeout_sec": 0, 00:23:28.964 "disable_auto_failback": false, 00:23:28.964 "generate_uuids": false, 00:23:28.964 "transport_tos": 0, 00:23:28.964 "nvme_error_stat": false, 00:23:28.964 "rdma_srq_size": 0, 
00:23:28.964 "io_path_stat": false, 00:23:28.964 "allow_accel_sequence": false, 00:23:28.964 "rdma_max_cq_size": 0, 00:23:28.964 "rdma_cm_event_timeout_ms": 0, 00:23:28.964 "dhchap_digests": [ 00:23:28.964 "sha256", 00:23:28.964 "sha384", 00:23:28.964 "sha512" 00:23:28.964 ], 00:23:28.964 "dhchap_dhgroups": [ 00:23:28.964 "null", 00:23:28.964 "ffdhe2048", 00:23:28.964 "ffdhe3072", 00:23:28.964 "ffdhe4096", 00:23:28.964 "ffdhe6144", 00:23:28.964 "ffdhe8192" 00:23:28.964 ] 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "bdev_nvme_set_hotplug", 00:23:28.964 "params": { 00:23:28.964 "period_us": 100000, 00:23:28.964 "enable": false 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "bdev_malloc_create", 00:23:28.964 "params": { 00:23:28.964 "name": "malloc0", 00:23:28.964 "num_blocks": 8192, 00:23:28.964 "block_size": 4096, 00:23:28.964 "physical_block_size": 4096, 00:23:28.964 "uuid": "2936873c-c2b1-4d85-b108-364df542cb6d", 00:23:28.964 "optimal_io_boundary": 0 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "bdev_wait_for_examine" 00:23:28.964 } 00:23:28.964 ] 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "subsystem": "nbd", 00:23:28.964 "config": [] 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "subsystem": "scheduler", 00:23:28.964 "config": [ 00:23:28.964 { 00:23:28.964 "method": "framework_set_scheduler", 00:23:28.964 "params": { 00:23:28.964 "name": "static" 00:23:28.964 } 00:23:28.964 } 00:23:28.964 ] 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "subsystem": "nvmf", 00:23:28.964 "config": [ 00:23:28.964 { 00:23:28.964 "method": "nvmf_set_config", 00:23:28.964 "params": { 00:23:28.964 "discovery_filter": "match_any", 00:23:28.964 "admin_cmd_passthru": { 00:23:28.964 "identify_ctrlr": false 00:23:28.964 } 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "nvmf_set_max_subsystems", 00:23:28.964 "params": { 00:23:28.964 "max_subsystems": 1024 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 
00:23:28.964 "method": "nvmf_set_crdt", 00:23:28.964 "params": { 00:23:28.964 "crdt1": 0, 00:23:28.964 "crdt2": 0, 00:23:28.964 "crdt3": 0 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "nvmf_create_transport", 00:23:28.964 "params": { 00:23:28.964 "trtype": "TCP", 00:23:28.964 "max_queue_depth": 128, 00:23:28.964 "max_io_qpairs_per_ctrlr": 127, 00:23:28.964 "in_capsule_data_size": 4096, 00:23:28.964 "max_io_size": 131072, 00:23:28.964 "io_unit_size": 131072, 00:23:28.964 "max_aq_depth": 128, 00:23:28.964 "num_shared_buffers": 511, 00:23:28.964 "buf_cache_size": 4294967295, 00:23:28.964 "dif_insert_or_strip": false, 00:23:28.964 "zcopy": false, 00:23:28.964 "c2h_success": false, 00:23:28.964 "sock_priority": 0, 00:23:28.964 "abort_timeout_sec": 1, 00:23:28.964 "ack_timeout": 0, 00:23:28.964 "data_wr_pool_size": 0 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "nvmf_create_subsystem", 00:23:28.964 "params": { 00:23:28.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.964 "allow_any_host": false, 00:23:28.964 "serial_number": "SPDK00000000000001", 00:23:28.964 "model_number": "SPDK bdev Controller", 00:23:28.964 "max_namespaces": 10, 00:23:28.964 "min_cntlid": 1, 00:23:28.964 "max_cntlid": 65519, 00:23:28.964 "ana_reporting": false 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "nvmf_subsystem_add_host", 00:23:28.964 "params": { 00:23:28.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.964 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.964 "psk": "/tmp/tmp.ncabER4Zno" 00:23:28.964 } 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "nvmf_subsystem_add_ns", 00:23:28.964 "params": { 00:23:28.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.964 "namespace": { 00:23:28.964 "nsid": 1, 00:23:28.964 "bdev_name": "malloc0", 00:23:28.964 "nguid": "2936873CC2B14D85B108364DF542CB6D", 00:23:28.964 "uuid": "2936873c-c2b1-4d85-b108-364df542cb6d", 00:23:28.964 "no_auto_visible": false 00:23:28.964 } 00:23:28.964 
} 00:23:28.964 }, 00:23:28.964 { 00:23:28.964 "method": "nvmf_subsystem_add_listener", 00:23:28.964 "params": { 00:23:28.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.964 "listen_address": { 00:23:28.964 "trtype": "TCP", 00:23:28.964 "adrfam": "IPv4", 00:23:28.964 "traddr": "10.0.0.2", 00:23:28.964 "trsvcid": "4420" 00:23:28.964 }, 00:23:28.964 "secure_channel": true 00:23:28.964 } 00:23:28.964 } 00:23:28.964 ] 00:23:28.964 } 00:23:28.964 ] 00:23:28.964 }' 00:23:28.964 18:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:29.223 18:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:29.223 "subsystems": [ 00:23:29.223 { 00:23:29.223 "subsystem": "keyring", 00:23:29.223 "config": [] 00:23:29.223 }, 00:23:29.223 { 00:23:29.223 "subsystem": "iobuf", 00:23:29.223 "config": [ 00:23:29.223 { 00:23:29.223 "method": "iobuf_set_options", 00:23:29.223 "params": { 00:23:29.223 "small_pool_count": 8192, 00:23:29.223 "large_pool_count": 1024, 00:23:29.223 "small_bufsize": 8192, 00:23:29.223 "large_bufsize": 135168 00:23:29.223 } 00:23:29.223 } 00:23:29.223 ] 00:23:29.223 }, 00:23:29.223 { 00:23:29.223 "subsystem": "sock", 00:23:29.223 "config": [ 00:23:29.223 { 00:23:29.223 "method": "sock_set_default_impl", 00:23:29.223 "params": { 00:23:29.223 "impl_name": "posix" 00:23:29.223 } 00:23:29.223 }, 00:23:29.223 { 00:23:29.223 "method": "sock_impl_set_options", 00:23:29.223 "params": { 00:23:29.223 "impl_name": "ssl", 00:23:29.223 "recv_buf_size": 4096, 00:23:29.223 "send_buf_size": 4096, 00:23:29.223 "enable_recv_pipe": true, 00:23:29.223 "enable_quickack": false, 00:23:29.223 "enable_placement_id": 0, 00:23:29.223 "enable_zerocopy_send_server": true, 00:23:29.223 "enable_zerocopy_send_client": false, 00:23:29.223 "zerocopy_threshold": 0, 00:23:29.223 "tls_version": 0, 00:23:29.223 "enable_ktls": false 00:23:29.223 } 00:23:29.223 }, 00:23:29.223 { 
00:23:29.223 "method": "sock_impl_set_options", 00:23:29.223 "params": { 00:23:29.223 "impl_name": "posix", 00:23:29.223 "recv_buf_size": 2097152, 00:23:29.223 "send_buf_size": 2097152, 00:23:29.223 "enable_recv_pipe": true, 00:23:29.223 "enable_quickack": false, 00:23:29.223 "enable_placement_id": 0, 00:23:29.223 "enable_zerocopy_send_server": true, 00:23:29.223 "enable_zerocopy_send_client": false, 00:23:29.223 "zerocopy_threshold": 0, 00:23:29.223 "tls_version": 0, 00:23:29.223 "enable_ktls": false 00:23:29.223 } 00:23:29.223 } 00:23:29.223 ] 00:23:29.223 }, 00:23:29.223 { 00:23:29.223 "subsystem": "vmd", 00:23:29.223 "config": [] 00:23:29.223 }, 00:23:29.223 { 00:23:29.223 "subsystem": "accel", 00:23:29.223 "config": [ 00:23:29.223 { 00:23:29.223 "method": "accel_set_options", 00:23:29.223 "params": { 00:23:29.223 "small_cache_size": 128, 00:23:29.223 "large_cache_size": 16, 00:23:29.223 "task_count": 2048, 00:23:29.223 "sequence_count": 2048, 00:23:29.223 "buf_count": 2048 00:23:29.223 } 00:23:29.223 } 00:23:29.223 ] 00:23:29.223 }, 00:23:29.223 { 00:23:29.223 "subsystem": "bdev", 00:23:29.223 "config": [ 00:23:29.223 { 00:23:29.223 "method": "bdev_set_options", 00:23:29.223 "params": { 00:23:29.223 "bdev_io_pool_size": 65535, 00:23:29.223 "bdev_io_cache_size": 256, 00:23:29.223 "bdev_auto_examine": true, 00:23:29.223 "iobuf_small_cache_size": 128, 00:23:29.223 "iobuf_large_cache_size": 16 00:23:29.223 } 00:23:29.223 }, 00:23:29.223 { 00:23:29.223 "method": "bdev_raid_set_options", 00:23:29.223 "params": { 00:23:29.223 "process_window_size_kb": 1024 00:23:29.223 } 00:23:29.223 }, 00:23:29.223 { 00:23:29.223 "method": "bdev_iscsi_set_options", 00:23:29.223 "params": { 00:23:29.223 "timeout_sec": 30 00:23:29.223 } 00:23:29.223 }, 00:23:29.223 { 00:23:29.223 "method": "bdev_nvme_set_options", 00:23:29.223 "params": { 00:23:29.223 "action_on_timeout": "none", 00:23:29.223 "timeout_us": 0, 00:23:29.223 "timeout_admin_us": 0, 00:23:29.223 "keep_alive_timeout_ms": 
10000, 00:23:29.223 "arbitration_burst": 0, 00:23:29.223 "low_priority_weight": 0, 00:23:29.223 "medium_priority_weight": 0, 00:23:29.223 "high_priority_weight": 0, 00:23:29.223 "nvme_adminq_poll_period_us": 10000, 00:23:29.223 "nvme_ioq_poll_period_us": 0, 00:23:29.223 "io_queue_requests": 512, 00:23:29.223 "delay_cmd_submit": true, 00:23:29.223 "transport_retry_count": 4, 00:23:29.223 "bdev_retry_count": 3, 00:23:29.223 "transport_ack_timeout": 0, 00:23:29.223 "ctrlr_loss_timeout_sec": 0, 00:23:29.223 "reconnect_delay_sec": 0, 00:23:29.223 "fast_io_fail_timeout_sec": 0, 00:23:29.223 "disable_auto_failback": false, 00:23:29.223 "generate_uuids": false, 00:23:29.223 "transport_tos": 0, 00:23:29.223 "nvme_error_stat": false, 00:23:29.223 "rdma_srq_size": 0, 00:23:29.223 "io_path_stat": false, 00:23:29.223 "allow_accel_sequence": false, 00:23:29.223 "rdma_max_cq_size": 0, 00:23:29.223 "rdma_cm_event_timeout_ms": 0, 00:23:29.223 "dhchap_digests": [ 00:23:29.223 "sha256", 00:23:29.223 "sha384", 00:23:29.223 "sha512" 00:23:29.223 ], 00:23:29.223 "dhchap_dhgroups": [ 00:23:29.223 "null", 00:23:29.223 "ffdhe2048", 00:23:29.223 "ffdhe3072", 00:23:29.223 "ffdhe4096", 00:23:29.223 "ffdhe6144", 00:23:29.223 "ffdhe8192" 00:23:29.223 ] 00:23:29.223 } 00:23:29.223 }, 00:23:29.223 { 00:23:29.224 "method": "bdev_nvme_attach_controller", 00:23:29.224 "params": { 00:23:29.224 "name": "TLSTEST", 00:23:29.224 "trtype": "TCP", 00:23:29.224 "adrfam": "IPv4", 00:23:29.224 "traddr": "10.0.0.2", 00:23:29.224 "trsvcid": "4420", 00:23:29.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.224 "prchk_reftag": false, 00:23:29.224 "prchk_guard": false, 00:23:29.224 "ctrlr_loss_timeout_sec": 0, 00:23:29.224 "reconnect_delay_sec": 0, 00:23:29.224 "fast_io_fail_timeout_sec": 0, 00:23:29.224 "psk": "/tmp/tmp.ncabER4Zno", 00:23:29.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.224 "hdgst": false, 00:23:29.224 "ddgst": false 00:23:29.224 } 00:23:29.224 }, 00:23:29.224 { 00:23:29.224 "method": 
"bdev_nvme_set_hotplug", 00:23:29.224 "params": { 00:23:29.224 "period_us": 100000, 00:23:29.224 "enable": false 00:23:29.224 } 00:23:29.224 }, 00:23:29.224 { 00:23:29.224 "method": "bdev_wait_for_examine" 00:23:29.224 } 00:23:29.224 ] 00:23:29.224 }, 00:23:29.224 { 00:23:29.224 "subsystem": "nbd", 00:23:29.224 "config": [] 00:23:29.224 } 00:23:29.224 ] 00:23:29.224 }' 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3638991 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3638991 ']' 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3638991 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3638991 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3638991' 00:23:29.224 killing process with pid 3638991 00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3638991 00:23:29.224 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.224 00:23:29.224 Latency(us) 00:23:29.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.224 =================================================================================================================== 00:23:29.224 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.224 [2024-07-14 18:55:17.239940] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:23:29.224 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3638991 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3638825 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3638825 ']' 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3638825 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3638825 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3638825' 00:23:29.482 killing process with pid 3638825 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3638825 00:23:29.482 [2024-07-14 18:55:17.491298] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:29.482 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3638825 00:23:29.740 18:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:29.740 18:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.740 18:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:29.740 "subsystems": [ 00:23:29.740 { 00:23:29.740 "subsystem": "keyring", 00:23:29.740 "config": [] 00:23:29.740 }, 00:23:29.740 { 00:23:29.740 "subsystem": "iobuf", 00:23:29.740 "config": [ 00:23:29.740 { 00:23:29.740 "method": "iobuf_set_options", 00:23:29.740 "params": { 00:23:29.740 "small_pool_count": 8192, 00:23:29.740 
"large_pool_count": 1024, 00:23:29.740 "small_bufsize": 8192, 00:23:29.740 "large_bufsize": 135168 00:23:29.740 } 00:23:29.740 } 00:23:29.740 ] 00:23:29.740 }, 00:23:29.740 { 00:23:29.740 "subsystem": "sock", 00:23:29.740 "config": [ 00:23:29.740 { 00:23:29.740 "method": "sock_set_default_impl", 00:23:29.740 "params": { 00:23:29.740 "impl_name": "posix" 00:23:29.740 } 00:23:29.740 }, 00:23:29.740 { 00:23:29.740 "method": "sock_impl_set_options", 00:23:29.740 "params": { 00:23:29.740 "impl_name": "ssl", 00:23:29.740 "recv_buf_size": 4096, 00:23:29.740 "send_buf_size": 4096, 00:23:29.740 "enable_recv_pipe": true, 00:23:29.740 "enable_quickack": false, 00:23:29.740 "enable_placement_id": 0, 00:23:29.740 "enable_zerocopy_send_server": true, 00:23:29.740 "enable_zerocopy_send_client": false, 00:23:29.740 "zerocopy_threshold": 0, 00:23:29.740 "tls_version": 0, 00:23:29.740 "enable_ktls": false 00:23:29.740 } 00:23:29.740 }, 00:23:29.740 { 00:23:29.740 "method": "sock_impl_set_options", 00:23:29.740 "params": { 00:23:29.740 "impl_name": "posix", 00:23:29.740 "recv_buf_size": 2097152, 00:23:29.740 "send_buf_size": 2097152, 00:23:29.740 "enable_recv_pipe": true, 00:23:29.740 "enable_quickack": false, 00:23:29.740 "enable_placement_id": 0, 00:23:29.740 "enable_zerocopy_send_server": true, 00:23:29.740 "enable_zerocopy_send_client": false, 00:23:29.740 "zerocopy_threshold": 0, 00:23:29.740 "tls_version": 0, 00:23:29.740 "enable_ktls": false 00:23:29.740 } 00:23:29.740 } 00:23:29.740 ] 00:23:29.740 }, 00:23:29.740 { 00:23:29.740 "subsystem": "vmd", 00:23:29.740 "config": [] 00:23:29.740 }, 00:23:29.740 { 00:23:29.740 "subsystem": "accel", 00:23:29.740 "config": [ 00:23:29.740 { 00:23:29.740 "method": "accel_set_options", 00:23:29.740 "params": { 00:23:29.740 "small_cache_size": 128, 00:23:29.740 "large_cache_size": 16, 00:23:29.740 "task_count": 2048, 00:23:29.740 "sequence_count": 2048, 00:23:29.740 "buf_count": 2048 00:23:29.740 } 00:23:29.740 } 00:23:29.740 ] 00:23:29.740 
}, 00:23:29.740 { 00:23:29.740 "subsystem": "bdev", 00:23:29.740 "config": [ 00:23:29.740 { 00:23:29.740 "method": "bdev_set_options", 00:23:29.740 "params": { 00:23:29.740 "bdev_io_pool_size": 65535, 00:23:29.740 "bdev_io_cache_size": 256, 00:23:29.740 "bdev_auto_examine": true, 00:23:29.740 "iobuf_small_cache_size": 128, 00:23:29.740 "iobuf_large_cache_size": 16 00:23:29.740 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "bdev_raid_set_options", 00:23:29.741 "params": { 00:23:29.741 "process_window_size_kb": 1024 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "bdev_iscsi_set_options", 00:23:29.741 "params": { 00:23:29.741 "timeout_sec": 30 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "bdev_nvme_set_options", 00:23:29.741 "params": { 00:23:29.741 "action_on_timeout": "none", 00:23:29.741 "timeout_us": 0, 00:23:29.741 "timeout_admin_us": 0, 00:23:29.741 "keep_alive_timeout_ms": 10000, 00:23:29.741 "arbitration_burst": 0, 00:23:29.741 "low_priority_weight": 0, 00:23:29.741 "medium_priority_weight": 0, 00:23:29.741 "high_priority_weight": 0, 00:23:29.741 "nvme_adminq_poll_period_us": 10000, 00:23:29.741 "nvme_ioq_poll_period_us": 0, 00:23:29.741 "io_queue_requests": 0, 00:23:29.741 "delay_cmd_submit": true, 00:23:29.741 "transport_retry_count": 4, 00:23:29.741 "bdev_retry_count": 3, 00:23:29.741 "transport_ack_timeout": 0, 00:23:29.741 "ctrlr_loss_timeout_sec": 0, 00:23:29.741 "reconnect_delay_sec": 0, 00:23:29.741 "fast_io_fail_timeout_sec": 0, 00:23:29.741 "disable_auto_failback": false, 00:23:29.741 "generate_uuids": false, 00:23:29.741 "transport_tos": 0, 00:23:29.741 "nvme_error_stat": false, 00:23:29.741 "rdma_srq_size": 0, 00:23:29.741 "io_path_stat": false, 00:23:29.741 "allow_accel_sequence": false, 00:23:29.741 "rdma_max_cq_size": 0, 00:23:29.741 "rdma_cm_event_timeout_ms": 0, 00:23:29.741 "dhchap_digests": [ 00:23:29.741 "sha256", 00:23:29.741 "sha384", 00:23:29.741 "sha512" 00:23:29.741 ], 
00:23:29.741 "dhchap_dhgroups": [ 00:23:29.741 "null", 00:23:29.741 "ffdhe2048", 00:23:29.741 "ffdhe3072", 00:23:29.741 "ffdhe4096", 00:23:29.741 "ffdhe6144", 00:23:29.741 "ffdhe8192" 00:23:29.741 ] 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "bdev_nvme_set_hotplug", 00:23:29.741 "params": { 00:23:29.741 "period_us": 100000, 00:23:29.741 "enable": false 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "bdev_malloc_create", 00:23:29.741 "params": { 00:23:29.741 "name": "malloc0", 00:23:29.741 "num_blocks": 8192, 00:23:29.741 "block_size": 4096, 00:23:29.741 "physical_block_size": 4096, 00:23:29.741 "uuid": "2936873c-c2b1-4d85-b108-364df542cb6d", 00:23:29.741 "optimal_io_boundary": 0 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "bdev_wait_for_examine" 00:23:29.741 } 00:23:29.741 ] 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "subsystem": "nbd", 00:23:29.741 "config": [] 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "subsystem": "scheduler", 00:23:29.741 "config": [ 00:23:29.741 { 00:23:29.741 "method": "framework_set_scheduler", 00:23:29.741 "params": { 00:23:29.741 "name": "static" 00:23:29.741 } 00:23:29.741 } 00:23:29.741 ] 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "subsystem": "nvmf", 00:23:29.741 "config": [ 00:23:29.741 { 00:23:29.741 "method": "nvmf_set_config", 00:23:29.741 "params": { 00:23:29.741 "discovery_filter": "match_any", 00:23:29.741 "admin_cmd_passthru": { 00:23:29.741 "identify_ctrlr": false 00:23:29.741 } 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "nvmf_set_max_subsystems", 00:23:29.741 "params": { 00:23:29.741 "max_subsystems": 1024 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "nvmf_set_crdt", 00:23:29.741 "params": { 00:23:29.741 "crdt1": 0, 00:23:29.741 "crdt2": 0, 00:23:29.741 "crdt3": 0 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "nvmf_create_transport", 00:23:29.741 "params": { 00:23:29.741 "trtype": 
"TCP", 00:23:29.741 "max_queue_depth": 128, 00:23:29.741 "max_io_qpairs_per_ctrlr": 127, 00:23:29.741 "in_capsule_data_size": 4096, 00:23:29.741 "max_io_size": 131072, 00:23:29.741 "io_unit_size": 131072, 00:23:29.741 "max_aq_depth": 128, 00:23:29.741 "num_shared_buffers": 511, 00:23:29.741 "buf_cache_size": 4294967295, 00:23:29.741 "dif_insert_or_strip": false, 00:23:29.741 "zcopy": false, 00:23:29.741 "c2h_success": false, 00:23:29.741 "sock_priority": 0, 00:23:29.741 "abort_timeout_sec": 1, 00:23:29.741 "ack_timeout": 0, 00:23:29.741 "data_wr_pool_size": 0 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "nvmf_create_subsystem", 00:23:29.741 "params": { 00:23:29.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.741 "allow_any_host": false, 00:23:29.741 "serial_number": "SPDK00000000000001", 00:23:29.741 "model_number": "SPDK bdev Controller", 00:23:29.741 "max_namespaces": 10, 00:23:29.741 "min_cntlid": 1, 00:23:29.741 "max_cntlid": 65519, 00:23:29.741 "ana_reporting": false 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "nvmf_subsystem_add_host", 00:23:29.741 "params": { 00:23:29.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.741 "host": "nqn.2016-06.io.spdk:host1", 00:23:29.741 "psk": "/tmp/tmp.ncabER4Zno" 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "nvmf_subsystem_add_ns", 00:23:29.741 "params": { 00:23:29.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.741 "namespace": { 00:23:29.741 "nsid": 1, 00:23:29.741 "bdev_name": "malloc0", 00:23:29.741 "nguid": "2936873CC2B14D85B108364DF542CB6D", 00:23:29.741 "uuid": "2936873c-c2b1-4d85-b108-364df542cb6d", 00:23:29.741 "no_auto_visible": false 00:23:29.741 } 00:23:29.741 } 00:23:29.741 }, 00:23:29.741 { 00:23:29.741 "method": "nvmf_subsystem_add_listener", 00:23:29.741 "params": { 00:23:29.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.741 "listen_address": { 00:23:29.741 "trtype": "TCP", 00:23:29.741 "adrfam": "IPv4", 00:23:29.741 "traddr": 
"10.0.0.2", 00:23:29.741 "trsvcid": "4420" 00:23:29.741 }, 00:23:29.741 "secure_channel": true 00:23:29.741 } 00:23:29.741 } 00:23:29.741 ] 00:23:29.741 } 00:23:29.741 ] 00:23:29.741 }' 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3639265 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3639265 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3639265 ']' 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.741 18:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.741 [2024-07-14 18:55:17.792705] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:29.741 [2024-07-14 18:55:17.792805] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.741 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.741 [2024-07-14 18:55:17.858377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.741 [2024-07-14 18:55:17.945484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.741 [2024-07-14 18:55:17.945546] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.741 [2024-07-14 18:55:17.945566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.741 [2024-07-14 18:55:17.945583] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.741 [2024-07-14 18:55:17.945598] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:29.741 [2024-07-14 18:55:17.945700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.000 [2024-07-14 18:55:18.173902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.000 [2024-07-14 18:55:18.189851] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:30.000 [2024-07-14 18:55:18.205925] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.000 [2024-07-14 18:55:18.217057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3639417 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3639417 /var/tmp/bdevperf.sock 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3639417 ']' 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.932 18:55:18 nvmf_tcp.nvmf_tls 
-- target/tls.sh@204 -- # echo '{ 00:23:30.932 "subsystems": [ 00:23:30.932 { 00:23:30.932 "subsystem": "keyring", 00:23:30.932 "config": [] 00:23:30.932 }, 00:23:30.932 { 00:23:30.932 "subsystem": "iobuf", 00:23:30.932 "config": [ 00:23:30.932 { 00:23:30.932 "method": "iobuf_set_options", 00:23:30.932 "params": { 00:23:30.932 "small_pool_count": 8192, 00:23:30.932 "large_pool_count": 1024, 00:23:30.932 "small_bufsize": 8192, 00:23:30.932 "large_bufsize": 135168 00:23:30.932 } 00:23:30.932 } 00:23:30.932 ] 00:23:30.932 }, 00:23:30.932 { 00:23:30.932 "subsystem": "sock", 00:23:30.932 "config": [ 00:23:30.932 { 00:23:30.932 "method": "sock_set_default_impl", 00:23:30.932 "params": { 00:23:30.932 "impl_name": "posix" 00:23:30.932 } 00:23:30.932 }, 00:23:30.932 { 00:23:30.932 "method": "sock_impl_set_options", 00:23:30.932 "params": { 00:23:30.932 "impl_name": "ssl", 00:23:30.932 "recv_buf_size": 4096, 00:23:30.932 "send_buf_size": 4096, 00:23:30.932 "enable_recv_pipe": true, 00:23:30.932 "enable_quickack": false, 00:23:30.932 "enable_placement_id": 0, 00:23:30.932 "enable_zerocopy_send_server": true, 00:23:30.932 "enable_zerocopy_send_client": false, 00:23:30.932 "zerocopy_threshold": 0, 00:23:30.932 "tls_version": 0, 00:23:30.932 "enable_ktls": false 00:23:30.932 } 00:23:30.932 }, 00:23:30.932 { 00:23:30.932 "method": "sock_impl_set_options", 00:23:30.932 "params": { 00:23:30.932 "impl_name": "posix", 00:23:30.932 "recv_buf_size": 2097152, 00:23:30.932 "send_buf_size": 2097152, 00:23:30.932 "enable_recv_pipe": true, 00:23:30.932 "enable_quickack": false, 00:23:30.932 "enable_placement_id": 0, 00:23:30.932 "enable_zerocopy_send_server": true, 00:23:30.932 "enable_zerocopy_send_client": false, 00:23:30.932 "zerocopy_threshold": 0, 00:23:30.932 "tls_version": 0, 00:23:30.932 "enable_ktls": false 00:23:30.932 } 00:23:30.932 } 00:23:30.932 ] 00:23:30.932 }, 00:23:30.932 { 00:23:30.932 "subsystem": "vmd", 00:23:30.932 "config": [] 00:23:30.932 }, 00:23:30.932 { 
00:23:30.932 "subsystem": "accel", 00:23:30.932 "config": [ 00:23:30.932 { 00:23:30.932 "method": "accel_set_options", 00:23:30.932 "params": { 00:23:30.932 "small_cache_size": 128, 00:23:30.932 "large_cache_size": 16, 00:23:30.932 "task_count": 2048, 00:23:30.932 "sequence_count": 2048, 00:23:30.932 "buf_count": 2048 00:23:30.932 } 00:23:30.932 } 00:23:30.932 ] 00:23:30.932 }, 00:23:30.932 { 00:23:30.932 "subsystem": "bdev", 00:23:30.932 "config": [ 00:23:30.932 { 00:23:30.932 "method": "bdev_set_options", 00:23:30.932 "params": { 00:23:30.932 "bdev_io_pool_size": 65535, 00:23:30.932 "bdev_io_cache_size": 256, 00:23:30.932 "bdev_auto_examine": true, 00:23:30.932 "iobuf_small_cache_size": 128, 00:23:30.932 "iobuf_large_cache_size": 16 00:23:30.932 } 00:23:30.932 }, 00:23:30.932 { 00:23:30.932 "method": "bdev_raid_set_options", 00:23:30.932 "params": { 00:23:30.932 "process_window_size_kb": 1024 00:23:30.932 } 00:23:30.932 }, 00:23:30.932 { 00:23:30.932 "method": "bdev_iscsi_set_options", 00:23:30.932 "params": { 00:23:30.932 "timeout_sec": 30 00:23:30.932 } 00:23:30.932 }, 00:23:30.932 { 00:23:30.932 "method": "bdev_nvme_set_options", 00:23:30.932 "params": { 00:23:30.932 "action_on_timeout": "none", 00:23:30.932 "timeout_us": 0, 00:23:30.932 "timeout_admin_us": 0, 00:23:30.932 "keep_alive_timeout_ms": 10000, 00:23:30.932 "arbitration_burst": 0, 00:23:30.932 "low_priority_weight": 0, 00:23:30.932 "medium_priority_weight": 0, 00:23:30.932 "high_priority_weight": 0, 00:23:30.932 "nvme_adminq_poll_period_us": 10000, 00:23:30.932 "nvme_ioq_poll_period_us": 0, 00:23:30.932 "io_queue_requests": 512, 00:23:30.932 "delay_cmd_submit": true, 00:23:30.932 "transport_retry_count": 4, 00:23:30.932 "bdev_retry_count": 3, 00:23:30.933 "transport_ack_timeout": 0, 00:23:30.933 "ctrlr_loss_timeout_sec": 0, 00:23:30.933 "reconnect_delay_sec": 0, 00:23:30.933 "fast_io_fail_timeout_sec": 0, 00:23:30.933 "disable_auto_failback": false, 00:23:30.933 "generate_uuids": false, 00:23:30.933 
"transport_tos": 0, 00:23:30.933 "nvme_error_stat": false, 00:23:30.933 "rdma_srq_size": 0, 00:23:30.933 "io_path_stat": false, 00:23:30.933 "allow_accel_sequence": false, 00:23:30.933 "rdma_max_cq_size": 0, 00:23:30.933 "rdma_cm_event_timeout_ms": 0, 00:23:30.933 "dhchap_digests": [ 00:23:30.933 "sha256", 00:23:30.933 "sha384", 00:23:30.933 "sha512" 00:23:30.933 ], 00:23:30.933 "dhchap_dhgroups": [ 00:23:30.933 "null", 00:23:30.933 "ffdhe2048", 00:23:30.933 "ffdhe3072", 00:23:30.933 "ffdhe4096", 00:23:30.933 "ffdhe6144", 00:23:30.933 "ffdhe8192" 00:23:30.933 ] 00:23:30.933 } 00:23:30.933 }, 00:23:30.933 { 00:23:30.933 "method": "bdev_nvme_attach_controller", 00:23:30.933 "params": { 00:23:30.933 "name": "TLSTEST", 00:23:30.933 "trtype": "TCP", 00:23:30.933 "adrfam": "IPv4", 00:23:30.933 "traddr": "10.0.0.2", 00:23:30.933 "trsvcid": "4420", 00:23:30.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.933 "prchk_reftag": false, 00:23:30.933 "prchk_guard": false, 00:23:30.933 "ctrlr_loss_timeout_sec": 0, 00:23:30.933 "reconnect_delay_sec": 0, 00:23:30.933 "fast_io_fail_timeout_sec": 0, 00:23:30.933 "psk": "/tmp/tmp.ncabER4Zno", 00:23:30.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.933 "hdgst": false, 00:23:30.933 "ddgst": false 00:23:30.933 } 00:23:30.933 }, 00:23:30.933 { 00:23:30.933 "method": "bdev_nvme_set_hotplug", 00:23:30.933 "params": { 00:23:30.933 "period_us": 100000, 00:23:30.933 "enable": false 00:23:30.933 } 00:23:30.933 }, 00:23:30.933 { 00:23:30.933 "method": "bdev_wait_for_examine" 00:23:30.933 } 00:23:30.933 ] 00:23:30.933 }, 00:23:30.933 { 00:23:30.933 "subsystem": "nbd", 00:23:30.933 "config": [] 00:23:30.933 } 00:23:30.933 ] 00:23:30.933 }' 00:23:30.933 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:30.933 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.933 18:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.933 [2024-07-14 18:55:18.861544] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:30.933 [2024-07-14 18:55:18.861621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639417 ] 00:23:30.933 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.933 [2024-07-14 18:55:18.918882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.933 [2024-07-14 18:55:19.001640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.191 [2024-07-14 18:55:19.170194] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.191 [2024-07-14 18:55:19.170307] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:31.755 18:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.755 18:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:31.755 18:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:31.755 Running I/O for 10 seconds... 
00:23:43.946
00:23:43.946 Latency(us)
00:23:43.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:43.946 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:43.946 Verification LBA range: start 0x0 length 0x2000
00:23:43.946 TLSTESTn1 : 10.02 3502.51 13.68 0.00 0.00 36483.92 6747.78 35729.26
00:23:43.946 ===================================================================================================================
00:23:43.946 Total : 3502.51 13.68 0.00 0.00 36483.92 6747.78 35729.26
00:23:43.946 0
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3639417
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3639417 ']'
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3639417
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3639417
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3639417' killing process with pid 3639417
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3639417 Received shutdown signal, test time was about 10.000000 seconds
00:23:43.946
00:23:43.946 Latency(us)
00:23:43.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:43.946 ===================================================================================================================
00:23:43.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:43.946 [2024-07-14 18:55:29.990246] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:23:43.946 18:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3639417
00:23:43.946 18:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3639265
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3639265 ']'
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3639265
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3639265
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3639265' killing process with pid 3639265
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3639265 [2024-07-14 18:55:30.244244] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3639265
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3640738
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3640738
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3640738 ']'
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:43.947 [2024-07-14 18:55:30.556309] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:23:43.947 [2024-07-14 18:55:30.556394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:43.947 EAL: No free 2048 kB hugepages reported on node 1
00:23:43.947 [2024-07-14 18:55:30.624562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:43.947 [2024-07-14 18:55:30.712842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:43.947 [2024-07-14 18:55:30.712909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:43.947 [2024-07-14 18:55:30.712941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:43.947 [2024-07-14 18:55:30.712953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:43.947 [2024-07-14 18:55:30.712964] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:43.947 [2024-07-14 18:55:30.712997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ncabER4Zno
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ncabER4Zno
00:23:43.947 18:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:43.947 [2024-07-14 18:55:31.085228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:43.947 18:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:43.947 18:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:43.947 [2024-07-14 18:55:31.562454] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:43.947 [2024-07-14 18:55:31.562695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:43.947 18:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:43.947 malloc0
00:23:43.947 18:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:44.204 18:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ncabER4Zno
00:23:44.462 [2024-07-14 18:55:32.444903] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3641022
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3641022 /var/tmp/bdevperf.sock
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3641022 ']'
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:44.462 18:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:44.462 [2024-07-14 18:55:32.509360] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:23:44.462 [2024-07-14 18:55:32.509430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641022 ]
00:23:44.462 EAL: No free 2048 kB hugepages reported on node 1
00:23:44.462 [2024-07-14 18:55:32.571764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:44.462 [2024-07-14 18:55:32.661708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:44.719 18:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:44.719 18:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:44.719 18:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ncabER4Zno
00:23:44.977 18:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:23:45.233 [2024-07-14 18:55:33.274083] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:45.233
nvme0n1
00:23:45.233 18:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:45.233 Running I/O for 1 seconds...
00:23:46.603
00:23:46.603 Latency(us)
00:23:46.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:46.603 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:46.603 Verification LBA range: start 0x0 length 0x2000
00:23:46.603 nvme0n1 : 1.02 2392.37 9.35 0.00 0.00 52997.41 11116.85 53593.88
00:23:46.603 ===================================================================================================================
00:23:46.603 Total : 2392.37 9.35 0.00 0.00 52997.41 11116.85 53593.88
00:23:46.603 0
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3641022
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3641022 ']'
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3641022
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3641022
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3641022' killing process with pid 3641022
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3641022 Received shutdown signal, test time was about 1.000000 seconds
00:23:46.603
00:23:46.603 Latency(us)
00:23:46.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:46.603 ===================================================================================================================
00:23:46.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3641022
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3640738
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3640738 ']'
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3640738
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3640738
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3640738' killing process with pid 3640738
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3640738 [2024-07-14 18:55:34.784447] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:23:46.603 18:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3640738
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3641307
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3641307
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3641307 ']'
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:46.862 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:46.863 [2024-07-14 18:55:35.076958] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:23:46.863 [2024-07-14 18:55:35.077039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:47.121 EAL: No free 2048 kB hugepages reported on node 1
00:23:47.121 [2024-07-14 18:55:35.139151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:47.121 [2024-07-14 18:55:35.221620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:47.121 [2024-07-14 18:55:35.221687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:47.121 [2024-07-14 18:55:35.221715] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:47.121 [2024-07-14 18:55:35.221727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:47.121 [2024-07-14 18:55:35.221736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:47.121 [2024-07-14 18:55:35.221769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:47.121 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:47.121 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:47.121 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:47.121 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:47.121 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:47.379 18:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:47.379 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:47.380 [2024-07-14 18:55:35.361773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:47.380 malloc0
00:23:47.380 [2024-07-14 18:55:35.393382] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:47.380 [2024-07-14 18:55:35.393638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3641386
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3641386 /var/tmp/bdevperf.sock
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3641386 ']'
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:47.380 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:47.380 [2024-07-14 18:55:35.466618] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:23:47.380 [2024-07-14 18:55:35.466696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641386 ]
00:23:47.380 EAL: No free 2048 kB hugepages reported on node 1
00:23:47.380 [2024-07-14 18:55:35.528125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:47.380 [2024-07-14 18:55:35.619548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:47.638 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:47.638 18:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:47.638 18:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ncabER4Zno
00:23:47.896 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:23:48.154 [2024-07-14 18:55:36.297482] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:48.154 nvme0n1
00:23:48.410 18:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:48.410 Running I/O for 1 seconds...
00:23:49.375
00:23:49.375 Latency(us)
00:23:49.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:49.375 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:49.375 Verification LBA range: start 0x0 length 0x2000
00:23:49.375 nvme0n1 : 1.02 3197.18 12.49 0.00 0.00 39601.30 6456.51 37088.52
00:23:49.375 ===================================================================================================================
00:23:49.375 Total : 3197.18 12.49 0.00 0.00 39601.30 6456.51 37088.52
00:23:49.375 0
00:23:49.375 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config
00:23:49.375 18:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:49.375 18:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:49.633 18:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:49.633 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{
00:23:49.633 "subsystems": [
00:23:49.633 {
00:23:49.633 "subsystem": "keyring",
00:23:49.633 "config": [
00:23:49.633 {
00:23:49.633 "method": "keyring_file_add_key",
00:23:49.633 "params": {
00:23:49.633 "name": "key0",
00:23:49.633 "path": "/tmp/tmp.ncabER4Zno"
00:23:49.633 }
00:23:49.633 }
00:23:49.633 ]
00:23:49.633 },
00:23:49.633 {
00:23:49.633 "subsystem": "iobuf",
00:23:49.633 "config": [
00:23:49.633 {
00:23:49.633 "method": "iobuf_set_options",
00:23:49.633 "params": {
00:23:49.633 "small_pool_count": 8192,
00:23:49.633 "large_pool_count": 1024,
00:23:49.633 "small_bufsize": 8192,
00:23:49.633 "large_bufsize": 135168
00:23:49.633 }
00:23:49.633 }
00:23:49.633 ]
00:23:49.633 },
00:23:49.633 {
00:23:49.633 "subsystem": "sock",
00:23:49.633 "config": [
00:23:49.633 {
00:23:49.633 "method": "sock_set_default_impl",
00:23:49.633 "params": {
00:23:49.633 "impl_name": "posix"
00:23:49.633 }
00:23:49.633 },
00:23:49.633 {
00:23:49.633 "method": "sock_impl_set_options",
00:23:49.633
"params": { 00:23:49.633 "impl_name": "ssl", 00:23:49.633 "recv_buf_size": 4096, 00:23:49.633 "send_buf_size": 4096, 00:23:49.633 "enable_recv_pipe": true, 00:23:49.633 "enable_quickack": false, 00:23:49.633 "enable_placement_id": 0, 00:23:49.633 "enable_zerocopy_send_server": true, 00:23:49.633 "enable_zerocopy_send_client": false, 00:23:49.633 "zerocopy_threshold": 0, 00:23:49.633 "tls_version": 0, 00:23:49.633 "enable_ktls": false 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "sock_impl_set_options", 00:23:49.633 "params": { 00:23:49.633 "impl_name": "posix", 00:23:49.633 "recv_buf_size": 2097152, 00:23:49.633 "send_buf_size": 2097152, 00:23:49.633 "enable_recv_pipe": true, 00:23:49.633 "enable_quickack": false, 00:23:49.633 "enable_placement_id": 0, 00:23:49.633 "enable_zerocopy_send_server": true, 00:23:49.633 "enable_zerocopy_send_client": false, 00:23:49.633 "zerocopy_threshold": 0, 00:23:49.633 "tls_version": 0, 00:23:49.633 "enable_ktls": false 00:23:49.633 } 00:23:49.633 } 00:23:49.633 ] 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "subsystem": "vmd", 00:23:49.633 "config": [] 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "subsystem": "accel", 00:23:49.633 "config": [ 00:23:49.633 { 00:23:49.633 "method": "accel_set_options", 00:23:49.633 "params": { 00:23:49.633 "small_cache_size": 128, 00:23:49.633 "large_cache_size": 16, 00:23:49.633 "task_count": 2048, 00:23:49.633 "sequence_count": 2048, 00:23:49.633 "buf_count": 2048 00:23:49.633 } 00:23:49.633 } 00:23:49.633 ] 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "subsystem": "bdev", 00:23:49.633 "config": [ 00:23:49.633 { 00:23:49.633 "method": "bdev_set_options", 00:23:49.633 "params": { 00:23:49.633 "bdev_io_pool_size": 65535, 00:23:49.633 "bdev_io_cache_size": 256, 00:23:49.633 "bdev_auto_examine": true, 00:23:49.633 "iobuf_small_cache_size": 128, 00:23:49.633 "iobuf_large_cache_size": 16 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "bdev_raid_set_options", 
00:23:49.633 "params": { 00:23:49.633 "process_window_size_kb": 1024 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "bdev_iscsi_set_options", 00:23:49.633 "params": { 00:23:49.633 "timeout_sec": 30 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "bdev_nvme_set_options", 00:23:49.633 "params": { 00:23:49.633 "action_on_timeout": "none", 00:23:49.633 "timeout_us": 0, 00:23:49.633 "timeout_admin_us": 0, 00:23:49.633 "keep_alive_timeout_ms": 10000, 00:23:49.633 "arbitration_burst": 0, 00:23:49.633 "low_priority_weight": 0, 00:23:49.633 "medium_priority_weight": 0, 00:23:49.633 "high_priority_weight": 0, 00:23:49.633 "nvme_adminq_poll_period_us": 10000, 00:23:49.633 "nvme_ioq_poll_period_us": 0, 00:23:49.633 "io_queue_requests": 0, 00:23:49.633 "delay_cmd_submit": true, 00:23:49.633 "transport_retry_count": 4, 00:23:49.633 "bdev_retry_count": 3, 00:23:49.633 "transport_ack_timeout": 0, 00:23:49.633 "ctrlr_loss_timeout_sec": 0, 00:23:49.633 "reconnect_delay_sec": 0, 00:23:49.633 "fast_io_fail_timeout_sec": 0, 00:23:49.633 "disable_auto_failback": false, 00:23:49.633 "generate_uuids": false, 00:23:49.633 "transport_tos": 0, 00:23:49.633 "nvme_error_stat": false, 00:23:49.633 "rdma_srq_size": 0, 00:23:49.633 "io_path_stat": false, 00:23:49.633 "allow_accel_sequence": false, 00:23:49.633 "rdma_max_cq_size": 0, 00:23:49.633 "rdma_cm_event_timeout_ms": 0, 00:23:49.633 "dhchap_digests": [ 00:23:49.633 "sha256", 00:23:49.633 "sha384", 00:23:49.633 "sha512" 00:23:49.633 ], 00:23:49.633 "dhchap_dhgroups": [ 00:23:49.633 "null", 00:23:49.633 "ffdhe2048", 00:23:49.633 "ffdhe3072", 00:23:49.633 "ffdhe4096", 00:23:49.633 "ffdhe6144", 00:23:49.633 "ffdhe8192" 00:23:49.633 ] 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "bdev_nvme_set_hotplug", 00:23:49.633 "params": { 00:23:49.633 "period_us": 100000, 00:23:49.633 "enable": false 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "bdev_malloc_create", 
00:23:49.633 "params": { 00:23:49.633 "name": "malloc0", 00:23:49.633 "num_blocks": 8192, 00:23:49.633 "block_size": 4096, 00:23:49.633 "physical_block_size": 4096, 00:23:49.633 "uuid": "741e254b-98f8-4c1b-86d0-2f3ab436a33e", 00:23:49.633 "optimal_io_boundary": 0 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "bdev_wait_for_examine" 00:23:49.633 } 00:23:49.633 ] 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "subsystem": "nbd", 00:23:49.633 "config": [] 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "subsystem": "scheduler", 00:23:49.633 "config": [ 00:23:49.633 { 00:23:49.633 "method": "framework_set_scheduler", 00:23:49.633 "params": { 00:23:49.633 "name": "static" 00:23:49.633 } 00:23:49.633 } 00:23:49.633 ] 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "subsystem": "nvmf", 00:23:49.633 "config": [ 00:23:49.633 { 00:23:49.633 "method": "nvmf_set_config", 00:23:49.633 "params": { 00:23:49.633 "discovery_filter": "match_any", 00:23:49.633 "admin_cmd_passthru": { 00:23:49.633 "identify_ctrlr": false 00:23:49.633 } 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "nvmf_set_max_subsystems", 00:23:49.633 "params": { 00:23:49.633 "max_subsystems": 1024 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "nvmf_set_crdt", 00:23:49.633 "params": { 00:23:49.633 "crdt1": 0, 00:23:49.633 "crdt2": 0, 00:23:49.633 "crdt3": 0 00:23:49.633 } 00:23:49.633 }, 00:23:49.633 { 00:23:49.633 "method": "nvmf_create_transport", 00:23:49.633 "params": { 00:23:49.633 "trtype": "TCP", 00:23:49.633 "max_queue_depth": 128, 00:23:49.633 "max_io_qpairs_per_ctrlr": 127, 00:23:49.633 "in_capsule_data_size": 4096, 00:23:49.633 "max_io_size": 131072, 00:23:49.633 "io_unit_size": 131072, 00:23:49.634 "max_aq_depth": 128, 00:23:49.634 "num_shared_buffers": 511, 00:23:49.634 "buf_cache_size": 4294967295, 00:23:49.634 "dif_insert_or_strip": false, 00:23:49.634 "zcopy": false, 00:23:49.634 "c2h_success": false, 00:23:49.634 "sock_priority": 0, 
00:23:49.634 "abort_timeout_sec": 1, 00:23:49.634 "ack_timeout": 0, 00:23:49.634 "data_wr_pool_size": 0 00:23:49.634 } 00:23:49.634 }, 00:23:49.634 { 00:23:49.634 "method": "nvmf_create_subsystem", 00:23:49.634 "params": { 00:23:49.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.634 "allow_any_host": false, 00:23:49.634 "serial_number": "00000000000000000000", 00:23:49.634 "model_number": "SPDK bdev Controller", 00:23:49.634 "max_namespaces": 32, 00:23:49.634 "min_cntlid": 1, 00:23:49.634 "max_cntlid": 65519, 00:23:49.634 "ana_reporting": false 00:23:49.634 } 00:23:49.634 }, 00:23:49.634 { 00:23:49.634 "method": "nvmf_subsystem_add_host", 00:23:49.634 "params": { 00:23:49.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.634 "host": "nqn.2016-06.io.spdk:host1", 00:23:49.634 "psk": "key0" 00:23:49.634 } 00:23:49.634 }, 00:23:49.634 { 00:23:49.634 "method": "nvmf_subsystem_add_ns", 00:23:49.634 "params": { 00:23:49.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.634 "namespace": { 00:23:49.634 "nsid": 1, 00:23:49.634 "bdev_name": "malloc0", 00:23:49.634 "nguid": "741E254B98F84C1B86D02F3AB436A33E", 00:23:49.634 "uuid": "741e254b-98f8-4c1b-86d0-2f3ab436a33e", 00:23:49.634 "no_auto_visible": false 00:23:49.634 } 00:23:49.634 } 00:23:49.634 }, 00:23:49.634 { 00:23:49.634 "method": "nvmf_subsystem_add_listener", 00:23:49.634 "params": { 00:23:49.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.634 "listen_address": { 00:23:49.634 "trtype": "TCP", 00:23:49.634 "adrfam": "IPv4", 00:23:49.634 "traddr": "10.0.0.2", 00:23:49.634 "trsvcid": "4420" 00:23:49.634 }, 00:23:49.634 "secure_channel": true 00:23:49.634 } 00:23:49.634 } 00:23:49.634 ] 00:23:49.634 } 00:23:49.634 ] 00:23:49.634 }' 00:23:49.634 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:49.901 18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:49.901 "subsystems": [ 00:23:49.901 { 
00:23:49.901 "subsystem": "keyring", 00:23:49.901 "config": [ 00:23:49.901 { 00:23:49.901 "method": "keyring_file_add_key", 00:23:49.901 "params": { 00:23:49.901 "name": "key0", 00:23:49.901 "path": "/tmp/tmp.ncabER4Zno" 00:23:49.901 } 00:23:49.901 } 00:23:49.901 ] 00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "subsystem": "iobuf", 00:23:49.901 "config": [ 00:23:49.901 { 00:23:49.901 "method": "iobuf_set_options", 00:23:49.901 "params": { 00:23:49.901 "small_pool_count": 8192, 00:23:49.901 "large_pool_count": 1024, 00:23:49.901 "small_bufsize": 8192, 00:23:49.901 "large_bufsize": 135168 00:23:49.901 } 00:23:49.901 } 00:23:49.901 ] 00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "subsystem": "sock", 00:23:49.901 "config": [ 00:23:49.901 { 00:23:49.901 "method": "sock_set_default_impl", 00:23:49.901 "params": { 00:23:49.901 "impl_name": "posix" 00:23:49.901 } 00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "method": "sock_impl_set_options", 00:23:49.901 "params": { 00:23:49.901 "impl_name": "ssl", 00:23:49.901 "recv_buf_size": 4096, 00:23:49.901 "send_buf_size": 4096, 00:23:49.901 "enable_recv_pipe": true, 00:23:49.901 "enable_quickack": false, 00:23:49.901 "enable_placement_id": 0, 00:23:49.901 "enable_zerocopy_send_server": true, 00:23:49.901 "enable_zerocopy_send_client": false, 00:23:49.901 "zerocopy_threshold": 0, 00:23:49.901 "tls_version": 0, 00:23:49.901 "enable_ktls": false 00:23:49.901 } 00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "method": "sock_impl_set_options", 00:23:49.901 "params": { 00:23:49.901 "impl_name": "posix", 00:23:49.901 "recv_buf_size": 2097152, 00:23:49.901 "send_buf_size": 2097152, 00:23:49.901 "enable_recv_pipe": true, 00:23:49.901 "enable_quickack": false, 00:23:49.901 "enable_placement_id": 0, 00:23:49.901 "enable_zerocopy_send_server": true, 00:23:49.901 "enable_zerocopy_send_client": false, 00:23:49.901 "zerocopy_threshold": 0, 00:23:49.901 "tls_version": 0, 00:23:49.901 "enable_ktls": false 00:23:49.901 } 00:23:49.901 } 00:23:49.901 ] 
00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "subsystem": "vmd", 00:23:49.901 "config": [] 00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "subsystem": "accel", 00:23:49.901 "config": [ 00:23:49.901 { 00:23:49.901 "method": "accel_set_options", 00:23:49.901 "params": { 00:23:49.901 "small_cache_size": 128, 00:23:49.901 "large_cache_size": 16, 00:23:49.901 "task_count": 2048, 00:23:49.901 "sequence_count": 2048, 00:23:49.901 "buf_count": 2048 00:23:49.901 } 00:23:49.901 } 00:23:49.901 ] 00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "subsystem": "bdev", 00:23:49.901 "config": [ 00:23:49.901 { 00:23:49.901 "method": "bdev_set_options", 00:23:49.901 "params": { 00:23:49.901 "bdev_io_pool_size": 65535, 00:23:49.901 "bdev_io_cache_size": 256, 00:23:49.901 "bdev_auto_examine": true, 00:23:49.901 "iobuf_small_cache_size": 128, 00:23:49.901 "iobuf_large_cache_size": 16 00:23:49.901 } 00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "method": "bdev_raid_set_options", 00:23:49.901 "params": { 00:23:49.901 "process_window_size_kb": 1024 00:23:49.901 } 00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "method": "bdev_iscsi_set_options", 00:23:49.901 "params": { 00:23:49.901 "timeout_sec": 30 00:23:49.901 } 00:23:49.901 }, 00:23:49.901 { 00:23:49.901 "method": "bdev_nvme_set_options", 00:23:49.901 "params": { 00:23:49.901 "action_on_timeout": "none", 00:23:49.901 "timeout_us": 0, 00:23:49.901 "timeout_admin_us": 0, 00:23:49.902 "keep_alive_timeout_ms": 10000, 00:23:49.902 "arbitration_burst": 0, 00:23:49.902 "low_priority_weight": 0, 00:23:49.902 "medium_priority_weight": 0, 00:23:49.902 "high_priority_weight": 0, 00:23:49.902 "nvme_adminq_poll_period_us": 10000, 00:23:49.902 "nvme_ioq_poll_period_us": 0, 00:23:49.902 "io_queue_requests": 512, 00:23:49.902 "delay_cmd_submit": true, 00:23:49.902 "transport_retry_count": 4, 00:23:49.902 "bdev_retry_count": 3, 00:23:49.902 "transport_ack_timeout": 0, 00:23:49.902 "ctrlr_loss_timeout_sec": 0, 00:23:49.902 "reconnect_delay_sec": 0, 00:23:49.902 
"fast_io_fail_timeout_sec": 0, 00:23:49.902 "disable_auto_failback": false, 00:23:49.902 "generate_uuids": false, 00:23:49.902 "transport_tos": 0, 00:23:49.902 "nvme_error_stat": false, 00:23:49.902 "rdma_srq_size": 0, 00:23:49.902 "io_path_stat": false, 00:23:49.902 "allow_accel_sequence": false, 00:23:49.902 "rdma_max_cq_size": 0, 00:23:49.902 "rdma_cm_event_timeout_ms": 0, 00:23:49.902 "dhchap_digests": [ 00:23:49.902 "sha256", 00:23:49.902 "sha384", 00:23:49.902 "sha512" 00:23:49.902 ], 00:23:49.902 "dhchap_dhgroups": [ 00:23:49.902 "null", 00:23:49.902 "ffdhe2048", 00:23:49.902 "ffdhe3072", 00:23:49.902 "ffdhe4096", 00:23:49.902 "ffdhe6144", 00:23:49.902 "ffdhe8192" 00:23:49.902 ] 00:23:49.902 } 00:23:49.902 }, 00:23:49.902 { 00:23:49.902 "method": "bdev_nvme_attach_controller", 00:23:49.902 "params": { 00:23:49.902 "name": "nvme0", 00:23:49.902 "trtype": "TCP", 00:23:49.902 "adrfam": "IPv4", 00:23:49.902 "traddr": "10.0.0.2", 00:23:49.902 "trsvcid": "4420", 00:23:49.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.902 "prchk_reftag": false, 00:23:49.902 "prchk_guard": false, 00:23:49.902 "ctrlr_loss_timeout_sec": 0, 00:23:49.902 "reconnect_delay_sec": 0, 00:23:49.902 "fast_io_fail_timeout_sec": 0, 00:23:49.902 "psk": "key0", 00:23:49.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.902 "hdgst": false, 00:23:49.902 "ddgst": false 00:23:49.902 } 00:23:49.902 }, 00:23:49.902 { 00:23:49.902 "method": "bdev_nvme_set_hotplug", 00:23:49.902 "params": { 00:23:49.902 "period_us": 100000, 00:23:49.902 "enable": false 00:23:49.902 } 00:23:49.902 }, 00:23:49.902 { 00:23:49.902 "method": "bdev_enable_histogram", 00:23:49.902 "params": { 00:23:49.902 "name": "nvme0n1", 00:23:49.902 "enable": true 00:23:49.902 } 00:23:49.902 }, 00:23:49.902 { 00:23:49.902 "method": "bdev_wait_for_examine" 00:23:49.902 } 00:23:49.902 ] 00:23:49.902 }, 00:23:49.902 { 00:23:49.902 "subsystem": "nbd", 00:23:49.902 "config": [] 00:23:49.902 } 00:23:49.902 ] 00:23:49.902 }' 00:23:49.902 
18:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3641386 00:23:49.902 18:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3641386 ']' 00:23:49.902 18:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3641386 00:23:49.902 18:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:49.902 18:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.902 18:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3641386 00:23:49.902 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:49.902 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:49.902 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3641386' 00:23:49.902 killing process with pid 3641386 00:23:49.902 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3641386 00:23:49.902 Received shutdown signal, test time was about 1.000000 seconds 00:23:49.902 00:23:49.902 Latency(us) 00:23:49.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.902 =================================================================================================================== 00:23:49.902 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.902 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3641386 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3641307 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3641307 ']' 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3641307 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3641307 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3641307' 00:23:50.160 killing process with pid 3641307 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3641307 00:23:50.160 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3641307 00:23:50.418 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:50.418 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.418 18:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:50.418 "subsystems": [ 00:23:50.418 { 00:23:50.418 "subsystem": "keyring", 00:23:50.418 "config": [ 00:23:50.418 { 00:23:50.418 "method": "keyring_file_add_key", 00:23:50.418 "params": { 00:23:50.418 "name": "key0", 00:23:50.418 "path": "/tmp/tmp.ncabER4Zno" 00:23:50.418 } 00:23:50.418 } 00:23:50.418 ] 00:23:50.418 }, 00:23:50.418 { 00:23:50.418 "subsystem": "iobuf", 00:23:50.418 "config": [ 00:23:50.418 { 00:23:50.418 "method": "iobuf_set_options", 00:23:50.418 "params": { 00:23:50.418 "small_pool_count": 8192, 00:23:50.418 "large_pool_count": 1024, 00:23:50.418 "small_bufsize": 8192, 00:23:50.418 "large_bufsize": 135168 00:23:50.418 } 00:23:50.418 } 00:23:50.418 ] 00:23:50.418 }, 00:23:50.418 { 00:23:50.418 "subsystem": "sock", 00:23:50.418 "config": [ 00:23:50.418 { 00:23:50.418 "method": "sock_set_default_impl", 00:23:50.418 "params": { 00:23:50.418 "impl_name": "posix" 00:23:50.418 } 00:23:50.418 }, 00:23:50.418 { 00:23:50.418 "method": "sock_impl_set_options", 00:23:50.418 "params": { 00:23:50.418 "impl_name": "ssl", 00:23:50.418 "recv_buf_size": 4096, 00:23:50.418 "send_buf_size": 4096, 
00:23:50.418 "enable_recv_pipe": true, 00:23:50.418 "enable_quickack": false, 00:23:50.418 "enable_placement_id": 0, 00:23:50.418 "enable_zerocopy_send_server": true, 00:23:50.418 "enable_zerocopy_send_client": false, 00:23:50.418 "zerocopy_threshold": 0, 00:23:50.418 "tls_version": 0, 00:23:50.418 "enable_ktls": false 00:23:50.418 } 00:23:50.418 }, 00:23:50.418 { 00:23:50.418 "method": "sock_impl_set_options", 00:23:50.418 "params": { 00:23:50.418 "impl_name": "posix", 00:23:50.418 "recv_buf_size": 2097152, 00:23:50.418 "send_buf_size": 2097152, 00:23:50.418 "enable_recv_pipe": true, 00:23:50.418 "enable_quickack": false, 00:23:50.418 "enable_placement_id": 0, 00:23:50.418 "enable_zerocopy_send_server": true, 00:23:50.418 "enable_zerocopy_send_client": false, 00:23:50.418 "zerocopy_threshold": 0, 00:23:50.418 "tls_version": 0, 00:23:50.418 "enable_ktls": false 00:23:50.418 } 00:23:50.418 } 00:23:50.418 ] 00:23:50.418 }, 00:23:50.418 { 00:23:50.418 "subsystem": "vmd", 00:23:50.418 "config": [] 00:23:50.418 }, 00:23:50.418 { 00:23:50.418 "subsystem": "accel", 00:23:50.418 "config": [ 00:23:50.418 { 00:23:50.418 "method": "accel_set_options", 00:23:50.418 "params": { 00:23:50.418 "small_cache_size": 128, 00:23:50.418 "large_cache_size": 16, 00:23:50.418 "task_count": 2048, 00:23:50.418 "sequence_count": 2048, 00:23:50.418 "buf_count": 2048 00:23:50.418 } 00:23:50.418 } 00:23:50.418 ] 00:23:50.418 }, 00:23:50.418 { 00:23:50.418 "subsystem": "bdev", 00:23:50.418 "config": [ 00:23:50.418 { 00:23:50.418 "method": "bdev_set_options", 00:23:50.418 "params": { 00:23:50.418 "bdev_io_pool_size": 65535, 00:23:50.418 "bdev_io_cache_size": 256, 00:23:50.418 "bdev_auto_examine": true, 00:23:50.418 "iobuf_small_cache_size": 128, 00:23:50.418 "iobuf_large_cache_size": 16 00:23:50.418 } 00:23:50.418 }, 00:23:50.418 { 00:23:50.418 "method": "bdev_raid_set_options", 00:23:50.418 "params": { 00:23:50.418 "process_window_size_kb": 1024 00:23:50.418 } 00:23:50.418 }, 00:23:50.418 { 
00:23:50.418 "method": "bdev_iscsi_set_options", 00:23:50.418 "params": { 00:23:50.418 "timeout_sec": 30 00:23:50.418 } 00:23:50.418 }, 00:23:50.418 { 00:23:50.418 "method": "bdev_nvme_set_options", 00:23:50.418 "params": { 00:23:50.419 "action_on_timeout": "none", 00:23:50.419 "timeout_us": 0, 00:23:50.419 "timeout_admin_us": 0, 00:23:50.419 "keep_alive_timeout_ms": 10000, 00:23:50.419 "arbitration_burst": 0, 00:23:50.419 "low_priority_weight": 0, 00:23:50.419 "medium_priority_weight": 0, 00:23:50.419 "high_priority_weight": 0, 00:23:50.419 "nvme_adminq_poll_period_us": 10000, 00:23:50.419 "nvme_ioq_poll_period_us": 0, 00:23:50.419 "io_queue_requests": 0, 00:23:50.419 "delay_cmd_submit": true, 00:23:50.419 "transport_retry_count": 4, 00:23:50.419 "bdev_retry_count": 3, 00:23:50.419 "transport_ack_timeout": 0, 00:23:50.419 "ctrlr_loss_timeout_sec": 0, 00:23:50.419 "reconnect_delay_sec": 0, 00:23:50.419 "fast_io_fail_timeout_sec": 0, 00:23:50.419 "disable_auto_failback": false, 00:23:50.419 "generate_uuids": false, 00:23:50.419 "transport_tos": 0, 00:23:50.419 "nvme_error_stat": false, 00:23:50.419 "rdma_srq_size": 0, 00:23:50.419 "io_path_stat": false, 00:23:50.419 "allow_accel_sequence": false, 00:23:50.419 "rdma_max_cq_size": 0, 00:23:50.419 "rdma_cm_event_timeout_ms": 0, 00:23:50.419 "dhchap_digests": [ 00:23:50.419 "sha256", 00:23:50.419 "sha384", 00:23:50.419 "sha512" 00:23:50.419 ], 00:23:50.419 "dhchap_dhgroups": [ 00:23:50.419 "null", 00:23:50.419 "ffdhe2048", 00:23:50.419 "ffdhe3072", 00:23:50.419 "ffdhe4096", 00:23:50.419 "ffdhe6144", 00:23:50.419 "ffdhe8192" 00:23:50.419 ] 00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "bdev_nvme_set_hotplug", 00:23:50.419 "params": { 00:23:50.419 "period_us": 100000, 00:23:50.419 "enable": false 00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "bdev_malloc_create", 00:23:50.419 "params": { 00:23:50.419 "name": "malloc0", 00:23:50.419 "num_blocks": 8192, 00:23:50.419 
"block_size": 4096, 00:23:50.419 "physical_block_size": 4096, 00:23:50.419 "uuid": "741e254b-98f8-4c1b-86d0-2f3ab436a33e", 00:23:50.419 "optimal_io_boundary": 0 00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "bdev_wait_for_examine" 00:23:50.419 } 00:23:50.419 ] 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "subsystem": "nbd", 00:23:50.419 "config": [] 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "subsystem": "scheduler", 00:23:50.419 "config": [ 00:23:50.419 { 00:23:50.419 "method": "framework_set_scheduler", 00:23:50.419 "params": { 00:23:50.419 "name": "static" 00:23:50.419 } 00:23:50.419 } 00:23:50.419 ] 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "subsystem": "nvmf", 00:23:50.419 "config": [ 00:23:50.419 { 00:23:50.419 "method": "nvmf_set_config", 00:23:50.419 "params": { 00:23:50.419 "discovery_filter": "match_any", 00:23:50.419 "admin_cmd_passthru": { 00:23:50.419 "identify_ctrlr": false 00:23:50.419 } 00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "nvmf_set_max_subsystems", 00:23:50.419 "params": { 00:23:50.419 "max_subsystems": 1024 00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "nvmf_set_crdt", 00:23:50.419 "params": { 00:23:50.419 "crdt1": 0, 00:23:50.419 "crdt2": 0, 00:23:50.419 "crdt3": 0 00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "nvmf_create_transport", 00:23:50.419 "params": { 00:23:50.419 "trtype": "TCP", 00:23:50.419 "max_queue_depth": 128, 00:23:50.419 "max_io_qpairs_per_ctrlr": 127, 00:23:50.419 "in_capsule_data_size": 4096, 00:23:50.419 "max_io_size": 131072, 00:23:50.419 "io_unit_size": 131072, 00:23:50.419 "max_aq_depth": 128, 00:23:50.419 "num_shared_buffers": 511, 00:23:50.419 "buf_cache_size": 4294967295, 00:23:50.419 "dif_insert_or_strip": false, 00:23:50.419 "zcopy": false, 00:23:50.419 "c2h_success": false, 00:23:50.419 "sock_priority": 0, 00:23:50.419 "abort_timeout_sec": 1, 00:23:50.419 "ack_timeout": 0, 00:23:50.419 "data_wr_pool_size": 0 
00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "nvmf_create_subsystem", 00:23:50.419 "params": { 00:23:50.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.419 "allow_any_host": false, 00:23:50.419 "serial_number": "00000000000000000000", 00:23:50.419 "model_number": "SPDK bdev Controller", 00:23:50.419 "max_namespaces": 32, 00:23:50.419 "min_cntlid": 1, 00:23:50.419 "max_cntlid": 65519, 00:23:50.419 "ana_reporting": false 00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "nvmf_subsystem_add_host", 00:23:50.419 "params": { 00:23:50.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.419 "host": "nqn.2016-06.io.spdk:host1", 00:23:50.419 "psk": "key0" 00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "nvmf_subsystem_add_ns", 00:23:50.419 "params": { 00:23:50.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.419 "namespace": { 00:23:50.419 "nsid": 1, 00:23:50.419 "bdev_name": "malloc0", 00:23:50.419 "nguid": "741E254B98F84C1B86D02F3AB436A33E", 00:23:50.419 "uuid": "741e254b-98f8-4c1b-86d0-2f3ab436a33e", 00:23:50.419 "no_auto_visible": false 00:23:50.419 } 00:23:50.419 } 00:23:50.419 }, 00:23:50.419 { 00:23:50.419 "method": "nvmf_subsystem_add_listener", 00:23:50.419 "params": { 00:23:50.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.419 "listen_address": { 00:23:50.419 "trtype": "TCP", 00:23:50.419 "adrfam": "IPv4", 00:23:50.419 "traddr": "10.0.0.2", 00:23:50.419 "trsvcid": "4420" 00:23:50.419 }, 00:23:50.419 "secure_channel": true 00:23:50.419 } 00:23:50.419 } 00:23:50.419 ] 00:23:50.419 } 00:23:50.419 ] 00:23:50.419 }' 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3641739 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3641739 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3641739 ']' 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.419 18:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.419 [2024-07-14 18:55:38.577413] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:50.419 [2024-07-14 18:55:38.577490] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.419 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.419 [2024-07-14 18:55:38.639658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.678 [2024-07-14 18:55:38.723046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.678 [2024-07-14 18:55:38.723098] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:50.678 [2024-07-14 18:55:38.723112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.678 [2024-07-14 18:55:38.723123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.678 [2024-07-14 18:55:38.723132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.678 [2024-07-14 18:55:38.723212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.936 [2024-07-14 18:55:38.957677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.936 [2024-07-14 18:55:38.989699] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.936 [2024-07-14 18:55:38.997044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3641887 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3641887 /var/tmp/bdevperf.sock 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3641887 ']' 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.502 18:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:51.502 "subsystems": [ 00:23:51.502 { 00:23:51.502 "subsystem": "keyring", 00:23:51.502 "config": [ 00:23:51.502 { 00:23:51.502 "method": "keyring_file_add_key", 00:23:51.502 "params": { 00:23:51.502 "name": "key0", 00:23:51.502 "path": "/tmp/tmp.ncabER4Zno" 00:23:51.502 } 00:23:51.502 } 00:23:51.502 ] 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "subsystem": "iobuf", 00:23:51.502 "config": [ 00:23:51.502 { 00:23:51.502 "method": "iobuf_set_options", 00:23:51.502 "params": { 00:23:51.502 "small_pool_count": 8192, 00:23:51.502 "large_pool_count": 1024, 00:23:51.502 "small_bufsize": 8192, 00:23:51.502 "large_bufsize": 135168 00:23:51.502 } 00:23:51.502 } 00:23:51.502 ] 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "subsystem": "sock", 00:23:51.502 "config": [ 00:23:51.502 { 00:23:51.502 "method": "sock_set_default_impl", 00:23:51.502 "params": { 00:23:51.502 "impl_name": "posix" 00:23:51.502 } 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "method": "sock_impl_set_options", 00:23:51.502 "params": { 00:23:51.502 "impl_name": "ssl", 00:23:51.502 "recv_buf_size": 4096, 00:23:51.502 "send_buf_size": 4096, 00:23:51.502 "enable_recv_pipe": true, 00:23:51.502 "enable_quickack": false, 00:23:51.502 "enable_placement_id": 0, 00:23:51.502 "enable_zerocopy_send_server": true, 00:23:51.502 "enable_zerocopy_send_client": false, 00:23:51.502 "zerocopy_threshold": 0, 00:23:51.502 "tls_version": 0, 00:23:51.502 "enable_ktls": false 00:23:51.502 } 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 
"method": "sock_impl_set_options", 00:23:51.502 "params": { 00:23:51.502 "impl_name": "posix", 00:23:51.502 "recv_buf_size": 2097152, 00:23:51.502 "send_buf_size": 2097152, 00:23:51.502 "enable_recv_pipe": true, 00:23:51.502 "enable_quickack": false, 00:23:51.502 "enable_placement_id": 0, 00:23:51.502 "enable_zerocopy_send_server": true, 00:23:51.502 "enable_zerocopy_send_client": false, 00:23:51.502 "zerocopy_threshold": 0, 00:23:51.502 "tls_version": 0, 00:23:51.502 "enable_ktls": false 00:23:51.502 } 00:23:51.502 } 00:23:51.502 ] 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "subsystem": "vmd", 00:23:51.502 "config": [] 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "subsystem": "accel", 00:23:51.502 "config": [ 00:23:51.502 { 00:23:51.502 "method": "accel_set_options", 00:23:51.502 "params": { 00:23:51.502 "small_cache_size": 128, 00:23:51.502 "large_cache_size": 16, 00:23:51.502 "task_count": 2048, 00:23:51.502 "sequence_count": 2048, 00:23:51.502 "buf_count": 2048 00:23:51.502 } 00:23:51.502 } 00:23:51.502 ] 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "subsystem": "bdev", 00:23:51.502 "config": [ 00:23:51.502 { 00:23:51.502 "method": "bdev_set_options", 00:23:51.502 "params": { 00:23:51.502 "bdev_io_pool_size": 65535, 00:23:51.502 "bdev_io_cache_size": 256, 00:23:51.502 "bdev_auto_examine": true, 00:23:51.502 "iobuf_small_cache_size": 128, 00:23:51.502 "iobuf_large_cache_size": 16 00:23:51.502 } 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "method": "bdev_raid_set_options", 00:23:51.502 "params": { 00:23:51.502 "process_window_size_kb": 1024 00:23:51.502 } 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "method": "bdev_iscsi_set_options", 00:23:51.502 "params": { 00:23:51.502 "timeout_sec": 30 00:23:51.502 } 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "method": "bdev_nvme_set_options", 00:23:51.502 "params": { 00:23:51.502 "action_on_timeout": "none", 00:23:51.502 "timeout_us": 0, 00:23:51.502 "timeout_admin_us": 0, 00:23:51.502 "keep_alive_timeout_ms": 10000, 
00:23:51.502 "arbitration_burst": 0, 00:23:51.502 "low_priority_weight": 0, 00:23:51.502 "medium_priority_weight": 0, 00:23:51.502 "high_priority_weight": 0, 00:23:51.502 "nvme_adminq_poll_period_us": 10000, 00:23:51.502 "nvme_ioq_poll_period_us": 0, 00:23:51.502 "io_queue_requests": 512, 00:23:51.502 "delay_cmd_submit": true, 00:23:51.502 "transport_retry_count": 4, 00:23:51.502 "bdev_retry_count": 3, 00:23:51.502 "transport_ack_timeout": 0, 00:23:51.502 "ctrlr_loss_timeout_sec": 0, 00:23:51.502 "reconnect_delay_sec": 0, 00:23:51.502 "fast_io_fail_timeout_sec": 0, 00:23:51.502 "disable_auto_failback": false, 00:23:51.502 "generate_uuids": false, 00:23:51.502 "transport_tos": 0, 00:23:51.502 "nvme_error_stat": false, 00:23:51.502 "rdma_srq_size": 0, 00:23:51.502 "io_path_stat": false, 00:23:51.502 "allow_accel_sequence": false, 00:23:51.502 "rdma_max_cq_size": 0, 00:23:51.502 "rdma_cm_event_timeout_ms": 0, 00:23:51.502 "dhchap_digests": [ 00:23:51.502 "sha256", 00:23:51.502 "sha384", 00:23:51.502 "sha512" ], 00:23:51.502 "dhchap_dhgroups": [ 00:23:51.502 "null", 00:23:51.502 "ffdhe2048", 00:23:51.502 "ffdhe3072", 00:23:51.502 "ffdhe4096", 00:23:51.502 "ffdhe6144", 00:23:51.502 "ffdhe8192" 00:23:51.502 ] 00:23:51.502 } 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "method": "bdev_nvme_attach_controller", 00:23:51.502 "params": { 00:23:51.502 "name": "nvme0", 00:23:51.502 "trtype": "TCP", 00:23:51.502 "adrfam": "IPv4", 00:23:51.502 "traddr": "10.0.0.2", 00:23:51.502 "trsvcid": "4420", 00:23:51.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.502 "prchk_reftag": false, 00:23:51.502 "prchk_guard": false, 00:23:51.502 "ctrlr_loss_timeout_sec": 0, 00:23:51.502 "reconnect_delay_sec": 0, 00:23:51.502 "fast_io_fail_timeout_sec": 0, 00:23:51.502 "psk": "key0", 00:23:51.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.502 "hdgst": false, 00:23:51.502 "ddgst": false 00:23:51.502 } 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "method": "bdev_nvme_set_hotplug", 00:23:51.502 "params": { 00:23:51.502 "period_us": 100000, 00:23:51.502 "enable": false 00:23:51.502 } 00:23:51.502 }, 00:23:51.502 { 00:23:51.502 "method": "bdev_enable_histogram", 00:23:51.503 "params": { 00:23:51.503 "name": "nvme0n1", 00:23:51.503 "enable": true 00:23:51.503 } 00:23:51.503 }, 00:23:51.503 { 00:23:51.503 "method": "bdev_wait_for_examine" 00:23:51.503 } 00:23:51.503 ] 00:23:51.503 }, 00:23:51.503 { 00:23:51.503 "subsystem": "nbd", 00:23:51.503 "config": [] 00:23:51.503 } 00:23:51.503 ] 00:23:51.503 }' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.503 18:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.503 [2024-07-14 18:55:39.631639] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:51.503 [2024-07-14 18:55:39.631718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641887 ] 00:23:51.503 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.503 [2024-07-14 18:55:39.697666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.760 [2024-07-14 18:55:39.788542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.760 [2024-07-14 18:55:39.966898] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.693 18:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.693 18:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:52.693 18:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.693 18:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:52.693 18:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.693 18:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.950 Running I/O for 1 seconds... 
00:23:53.884 00:23:53.884 Latency(us) 00:23:53.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.884 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:53.884 Verification LBA range: start 0x0 length 0x2000 00:23:53.884 nvme0n1 : 1.03 3221.80 12.59 0.00 0.00 39234.45 6213.78 37865.24 00:23:53.884 =================================================================================================================== 00:23:53.884 Total : 3221.80 12.59 0.00 0.00 39234.45 6213.78 37865.24 00:23:53.884 0 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:53.884 nvmf_trace.0 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3641887 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3641887 ']' 
00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3641887 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.884 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3641887 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3641887' 00:23:54.142 killing process with pid 3641887 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3641887 00:23:54.142 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.142 00:23:54.142 Latency(us) 00:23:54.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.142 =================================================================================================================== 00:23:54.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3641887 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.142 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.142 rmmod nvme_tcp 00:23:54.142 rmmod nvme_fabrics 00:23:54.400 rmmod nvme_keyring 00:23:54.400 18:55:42 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3641739 ']' 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3641739 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3641739 ']' 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3641739 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3641739 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3641739' 00:23:54.400 killing process with pid 3641739 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3641739 00:23:54.400 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3641739 00:23:54.659 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:54.659 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:54.659 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:54.659 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.659 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.659 18:55:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:54.659 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.659 18:55:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.582 18:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.582 18:55:44 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aZ9W0PF70y /tmp/tmp.pB3r6MXD8H /tmp/tmp.ncabER4Zno 00:23:56.582 00:23:56.582 real 1m19.343s 00:23:56.582 user 2m9.579s 00:23:56.582 sys 0m25.225s 00:23:56.582 18:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.582 18:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 ************************************ 00:23:56.582 END TEST nvmf_tls 00:23:56.582 ************************************ 00:23:56.582 18:55:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:56.582 18:55:44 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:56.582 18:55:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:56.582 18:55:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.582 18:55:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 ************************************ 00:23:56.582 START TEST nvmf_fips 00:23:56.582 ************************************ 00:23:56.582 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:56.582 * Looking for test storage... 
00:23:56.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:56.582 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.582 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.841 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:56.842 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:56.843 Error setting digest 00:23:56.843 00A20E7F717F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:56.843 00A20E7F717F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:23:56.843 18:55:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:58.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:58.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.738 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:58.739 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:58.739 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.739 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.997 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.997 18:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:23:58.997 00:23:58.997 --- 10.0.0.2 ping statistics --- 00:23:58.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.997 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:23:58.997 00:23:58.997 --- 10.0.0.1 ping statistics --- 00:23:58.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.997 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3644247 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3644247 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3644247 ']' 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.997 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:58.997 [2024-07-14 18:55:47.117059] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:58.997 [2024-07-14 18:55:47.117134] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.997 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.998 [2024-07-14 18:55:47.185235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.256 [2024-07-14 18:55:47.279453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.256 [2024-07-14 18:55:47.279521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.256 [2024-07-14 18:55:47.279538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.256 [2024-07-14 18:55:47.279551] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.256 [2024-07-14 18:55:47.279563] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.256 [2024-07-14 18:55:47.279606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:59.256 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:59.514 [2024-07-14 18:55:47.659100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.514 [2024-07-14 18:55:47.675071] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:23:59.514 [2024-07-14 18:55:47.675307] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.514 [2024-07-14 18:55:47.706901] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:59.514 malloc0 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3644273 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3644273 /var/tmp/bdevperf.sock 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3644273 ']' 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.514 18:55:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.772 [2024-07-14 18:55:47.799554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:59.772 [2024-07-14 18:55:47.799639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644273 ] 00:23:59.773 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.773 [2024-07-14 18:55:47.859744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.773 [2024-07-14 18:55:47.950848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.031 18:55:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.031 18:55:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:00.031 18:55:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:00.289 [2024-07-14 18:55:48.292898] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.289 [2024-07-14 18:55:48.293036] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:00.289 TLSTESTn1 00:24:00.289 18:55:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.289 Running I/O for 10 seconds... 
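The PSK that `fips.sh` writes to `key.txt` and passes to `bdev_nvme_attach_controller --psk` above is in the NVMe/TCP "PSK interchange" text form, `NVMeTLSkey-1:<hash>:<base64>:`. As a rough sketch of how such a key can be unpacked — the trailing-CRC layout is my reading of the NVMe/TCP transport spec, not something this log confirms — the base64 payload splits into the configured PSK plus a 4-byte checksum:

```python
import base64
import struct
import zlib

# The interchange key set by fips.sh@136 in the trace above.
key = "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:"

prefix, hash_id, b64, _ = key.split(":")   # trailing ':' yields an empty 4th part
assert prefix == "NVMeTLSkey-1"

blob = base64.b64decode(b64)
psk, crc = blob[:-4], struct.unpack("<I", blob[-4:])[0]

print(len(psk))  # 32-byte configured PSK (hash id 01)
# Whether this matches depends on the exact CRC-32 variant the spec mandates;
# computed here with the common IEEE polynomial as an illustration only.
print(crc == zlib.crc32(psk))
```

The `chmod 0600` on the key file in the trace is the usual hygiene for such material: the interchange form is a reversible encoding, not a hash.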
00:24:12.481 00:24:12.481 Latency(us) 00:24:12.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.481 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:12.481 Verification LBA range: start 0x0 length 0x2000 00:24:12.481 TLSTESTn1 : 10.02 3537.68 13.82 0.00 0.00 36118.88 6068.15 50098.63 00:24:12.481 =================================================================================================================== 00:24:12.481 Total : 3537.68 13.82 0.00 0.00 36118.88 6068.15 50098.63 00:24:12.481 0 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:12.481 nvmf_trace.0 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3644273 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3644273 ']' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill 
-0 3644273 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3644273 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3644273' 00:24:12.481 killing process with pid 3644273 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3644273 00:24:12.481 Received shutdown signal, test time was about 10.000000 seconds 00:24:12.481 00:24:12.481 Latency(us) 00:24:12.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.481 =================================================================================================================== 00:24:12.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.481 [2024-07-14 18:55:58.622020] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3644273 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:24:12.481 rmmod nvme_tcp 00:24:12.481 rmmod nvme_fabrics 00:24:12.481 rmmod nvme_keyring 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3644247 ']' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3644247 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3644247 ']' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3644247 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3644247 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3644247' 00:24:12.481 killing process with pid 3644247 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3644247 00:24:12.481 [2024-07-14 18:55:58.898236] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:12.481 18:55:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3644247 00:24:12.481 18:55:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.481 18:55:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.481 18:55:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
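The bdevperf summary earlier in the run (3537.68 IOPS on TLSTESTn1) is internally consistent with its 13.82 MiB/s column: with the `-o 4096` IO size from the bdevperf command line, throughput is just IOPS times block size.

```python
iops = 3537.68                 # TLSTESTn1 row from the bdevperf table above
io_size = 4096                 # -o 4096 on the bdevperf command line
mib_per_s = iops * io_size / (1024 * 1024)
print(round(mib_per_s, 2))     # matches the reported MiB/s column: 13.82
```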
00:24:12.481 18:55:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.481 18:55:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.481 18:55:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.481 18:55:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.481 18:55:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.048 18:56:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:13.048 18:56:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:13.048 00:24:13.048 real 0m16.419s 00:24:13.048 user 0m21.295s 00:24:13.048 sys 0m5.393s 00:24:13.048 18:56:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:13.048 18:56:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:13.048 ************************************ 00:24:13.048 END TEST nvmf_fips 00:24:13.048 ************************************ 00:24:13.048 18:56:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:13.048 18:56:01 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:13.048 18:56:01 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:13.048 18:56:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:13.048 18:56:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:13.048 18:56:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:13.048 ************************************ 00:24:13.048 START TEST nvmf_fuzz 00:24:13.048 ************************************ 00:24:13.048 18:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:13.048 * Looking for test storage... 00:24:13.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:13.306 18:56:01 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:15.205 
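`nvmf/common.sh` buckets candidate NICs by PCI vendor:device ID before deciding which ports the test can use; a minimal sketch of that classification, with the ID table reconstructed from the `e810`/`x722`/`mlx` arrays populated in the trace above (the function name is illustrative, not part of the script):

```python
# Vendor/device IDs as registered in the arrays above (0x8086 Intel, 0x15b3 Mellanox).
E810 = {(0x8086, 0x1592), (0x8086, 0x159b)}
X722 = {(0x8086, 0x37d2)}
MLX = {(0x15b3, d) for d in
       (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013)}

def classify(vendor: int, device: int) -> str:
    """Map a PCI vendor:device pair to the NIC family the script tracks."""
    if (vendor, device) in E810:
        return "e810"
    if (vendor, device) in X722:
        return "x722"
    if (vendor, device) in MLX:
        return "mlx"
    return "unknown"

# The two ports this run discovers, 0000:0a:00.0/.1 (0x8086 - 0x159b):
print(classify(0x8086, 0x159b))  # -> e810
```

This is why the subsequent `Found 0000:0a:00.0 (0x8086 - 0x159b)` lines land in the e810 branch and the script proceeds with the `ice`-driven `cvl_0_0`/`cvl_0_1` net devices.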
18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:15.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:15.205 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:15.205 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:15.205 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.205 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:15.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:24:15.463 00:24:15.463 --- 10.0.0.2 ping statistics --- 00:24:15.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.463 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:24:15.463 00:24:15.463 --- 10.0.0.1 ping statistics --- 00:24:15.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.463 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=3647520 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3647520 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3647520 ']' 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.463 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:15.721 Malloc0 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:15.721 18:56:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.722 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:15.722 18:56:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:47.819 Fuzzing completed. Shutting down the fuzz application 00:24:47.819 00:24:47.819 Dumping successful admin opcodes: 00:24:47.819 8, 9, 10, 24, 00:24:47.819 Dumping successful io opcodes: 00:24:47.819 0, 9, 00:24:47.819 NS: 0x200003aeff00 I/O qp, Total commands completed: 491379, total successful commands: 2829, random_seed: 1780789504 00:24:47.819 NS: 0x200003aeff00 admin qp, Total commands completed: 59136, total successful commands: 470, random_seed: 2117408960 00:24:47.819 18:56:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:47.819 Fuzzing completed. 
Shutting down the fuzz application 00:24:47.819 00:24:47.819 Dumping successful admin opcodes: 00:24:47.819 24, 00:24:47.819 Dumping successful io opcodes: 00:24:47.819 00:24:47.819 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1544990496 00:24:47.819 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1545116256 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.819 rmmod nvme_tcp 00:24:47.819 rmmod nvme_fabrics 00:24:47.819 rmmod nvme_keyring 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3647520 ']' 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 3647520 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3647520 ']' 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 3647520 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3647520 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3647520' 00:24:47.819 killing process with pid 3647520 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 3647520 00:24:47.819 18:56:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 3647520 00:24:48.075 18:56:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:48.075 18:56:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:48.075 18:56:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:48.075 18:56:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:48.075 18:56:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:48.075 18:56:36 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.075 18:56:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.075 18:56:36 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.977 18:56:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:49.977 18:56:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 
-- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:49.977 00:24:49.977 real 0m36.915s 00:24:49.977 user 0m51.750s 00:24:49.977 sys 0m14.815s 00:24:49.977 18:56:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.977 18:56:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:49.977 ************************************ 00:24:49.977 END TEST nvmf_fuzz 00:24:49.977 ************************************ 00:24:49.977 18:56:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:49.977 18:56:38 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:49.977 18:56:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:49.977 18:56:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:49.977 18:56:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:49.977 ************************************ 00:24:49.977 START TEST nvmf_multiconnection 00:24:49.977 ************************************ 00:24:49.977 18:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:50.235 * Looking for test storage... 
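The `nvmf_fuzz` run above drives the target through a fixed `rpc_cmd` sequence: create the TCP transport, create a malloc bdev, create a subsystem, attach the namespace, and add a listener. As a reading aid, that sequence can be sketched as the JSON-RPC payloads behind those calls. The method names (`nvmf_create_transport`, `bdev_malloc_create`, and so on) are taken directly from the log; the exact parameter keys are illustrative assumptions, not the authoritative SPDK RPC schema.

```python
# Sketch of the JSON-RPC requests behind the rpc_cmd calls in the fuzz setup
# above. Method names appear verbatim in the log; parameter key names are
# illustrative assumptions, not the authoritative SPDK schema.
def jsonrpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

setup_calls = [
    # nvmf_create_transport -t tcp -o -u 8192
    ("nvmf_create_transport", {"trtype": "TCP", "io_unit_size": 8192}),
    # bdev_malloc_create -b Malloc0 64 512  (64 MiB bdev, 512-byte blocks)
    ("bdev_malloc_create", {"name": "Malloc0",
                            "num_blocks": 64 * 1024 * 1024 // 512,
                            "block_size": 512}),
    # nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ("nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "allow_any_host": True,
                               "serial_number": "SPDK00000000000001"}),
    # nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ("nvmf_subsystem_add_ns", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "namespace": {"bdev_name": "Malloc0"}}),
    # nvmf_subsystem_add_listener ... -t tcp -a 10.0.0.2 -s 4420
    ("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                     "listen_address": {"trtype": "TCP",
                                                        "traddr": "10.0.0.2",
                                                        "trsvcid": "4420"}}),
]

requests = [jsonrpc_request(i, m, p) for i, (m, p) in enumerate(setup_calls, 1)]
```

The resulting `trid` string used by `nvme_fuzz` (`trtype:tcp adrfam:IPv4 subnqn:... traddr:10.0.0.2 trsvcid:4420`) is just these same listener parameters re-encoded for the initiator side.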
00:24:50.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.235 
18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.235 18:56:38 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.236 18:56:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@291 -- # pci_devs=() 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.137 
18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:52.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.137 
18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:52.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:52.137 Found net devices under 
0000:0a:00.0: cvl_0_0 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:52.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.137 
18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.137 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:24:52.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:24:52.137 00:24:52.137 --- 10.0.0.2 ping statistics --- 00:24:52.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.137 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:24:52.137 00:24:52.137 --- 10.0.0.1 ping statistics --- 00:24:52.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.137 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:52.137 
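The `nvmf_tcp_init` steps traced above build the test topology without extra hardware: one ice port (`cvl_0_0`) is moved into a network namespace to act as the target at 10.0.0.2, while the other (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, and port 4420 is opened in iptables. A minimal helper reconstructing that (abridged) command sequence from the parameters in the log; it only builds the strings, since actually running them needs root and the `cvl_0_*` devices.

```python
# Reconstruction of the namespace-based TCP topology set up by nvmftestinit
# above (abridged: the ip-addr-flush steps are omitted). This builds command
# strings only; executing them requires root and the cvl_0_* net devices.
def tcp_init_cmds(target_if="cvl_0_0", init_if="cvl_0_1",
                  ns="cvl_0_0_ns_spdk",
                  init_ip="10.0.0.1", target_ip="10.0.0.2", port=4420):
    return [
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",          # target NIC into the ns
        f"ip addr add {init_ip}/24 dev {init_if}",      # initiator side
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {init_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        f"iptables -I INPUT 1 -i {init_if} -p tcp --dport {port} -j ACCEPT",
    ]
```

The two `ping -c 1` checks in the log (root namespace to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) verify this topology in both directions before `nvmf_tgt` is launched inside the namespace.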
18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3653133 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3653133 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 3653133 ']' 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.137 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.394 [2024-07-14 18:56:40.368539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:52.394 [2024-07-14 18:56:40.368616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.394 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.395 [2024-07-14 18:56:40.439238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:52.395 [2024-07-14 18:56:40.532927] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:52.395 [2024-07-14 18:56:40.532982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.395 [2024-07-14 18:56:40.532999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.395 [2024-07-14 18:56:40.533013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.395 [2024-07-14 18:56:40.533024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.395 [2024-07-14 18:56:40.533091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.395 [2024-07-14 18:56:40.533144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.395 [2024-07-14 18:56:40.533269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.395 [2024-07-14 18:56:40.533273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.652 
[2024-07-14 18:56:40.691678] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.652 Malloc1 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.652 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.652 18:56:40 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 [2024-07-14 18:56:40.749047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 Malloc2 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 Malloc3 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.653 Malloc4 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.653 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 Malloc5 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 Malloc6 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 Malloc7 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.911 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.912 Malloc8 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.912 Malloc9 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.912 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 Malloc10 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 Malloc11 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.169 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:53.734 18:56:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:53.734 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:53.734 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:53.734 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:53.734 18:56:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:56.276 18:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:56.276 18:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:56.276 18:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:56.276 18:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:56.276 18:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- 
# (( nvme_devices == nvme_device_counter )) 00:24:56.276 18:56:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:56.276 18:56:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.276 18:56:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:56.533 18:56:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:56.533 18:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:56.533 18:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:56.533 18:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:56.533 18:56:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:59.058 18:56:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:59.058 18:56:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:59.058 18:56:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:59.058 18:56:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:59.058 18:56:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.058 18:56:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:59.058 18:56:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.058 18:56:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:59.316 18:56:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:59.316 18:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:59.316 18:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:59.316 18:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:59.316 18:56:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:01.212 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:01.212 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:01.212 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:01.212 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:01.212 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.212 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:01.212 18:56:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.212 18:56:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:01.780 18:56:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:01.780 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:25:01.780 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.780 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:01.780 18:56:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:04.355 18:56:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:04.355 18:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:04.355 18:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:04.355 18:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:04.355 18:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.355 18:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:04.355 18:56:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.355 18:56:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:04.613 18:56:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:04.613 18:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.613 18:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.613 18:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:04.613 18:56:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:06.538 18:56:54 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:06.538 18:56:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:06.538 18:56:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:06.538 18:56:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:06.538 18:56:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.538 18:56:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:06.539 18:56:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.539 18:56:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:07.472 18:56:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:07.472 18:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:07.472 18:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:07.472 18:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:07.472 18:56:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:09.375 18:56:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:09.375 18:56:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:09.375 18:56:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:09.375 18:56:57 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:09.375 18:56:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:09.375 18:56:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:09.375 18:56:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:09.376 18:56:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:25:10.308 18:56:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:25:10.308 18:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:10.308 18:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:10.308 18:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:10.308 18:56:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:12.213 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:12.213 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:12.213 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7
00:25:12.213 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:12.213 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:12.213 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:12.213 18:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:12.213 18:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:25:12.779 18:57:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:25:12.779 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:12.779 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:12.779 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:12.779 18:57:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:15.308 18:57:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:15.308 18:57:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:15.308 18:57:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8
00:25:15.308 18:57:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:15.308 18:57:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:15.308 18:57:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:15.308 18:57:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:15.308 18:57:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:25:15.565 18:57:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:25:15.565 18:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:15.565 18:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:15.565 18:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:15.565 18:57:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:18.091 18:57:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:18.091 18:57:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:18.091 18:57:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9
00:25:18.091 18:57:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:18.091 18:57:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:18.091 18:57:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:18.091 18:57:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:18.091 18:57:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:25:18.655 18:57:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:25:18.655 18:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:18.655 18:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:18.655 18:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:18.655 18:57:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:20.549 18:57:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:20.549 18:57:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:20.549 18:57:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10
00:25:20.549 18:57:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:20.549 18:57:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:20.549 18:57:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:20.549 18:57:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:20.549 18:57:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:25:21.481 18:57:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:25:21.481 18:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0
00:25:21.481 18:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:21.481 18:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:21.481 18:57:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2
00:25:23.377 18:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:23.377 18:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:23.377 18:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11
00:25:23.377 18:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:23.377 18:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:23.377 18:57:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:25:23.377 18:57:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:25:23.377 [global]
00:25:23.378 thread=1
00:25:23.378 invalidate=1
00:25:23.378 rw=read
00:25:23.378 time_based=1
00:25:23.378 runtime=10
00:25:23.378 ioengine=libaio
00:25:23.378 direct=1
00:25:23.378 bs=262144
00:25:23.378 iodepth=64
00:25:23.378 norandommap=1
00:25:23.378 numjobs=1
00:25:23.378
00:25:23.378 [job0]
00:25:23.378 filename=/dev/nvme0n1
00:25:23.378 [job1]
00:25:23.378 filename=/dev/nvme10n1
00:25:23.378 [job2]
00:25:23.378 filename=/dev/nvme1n1
00:25:23.378 [job3]
00:25:23.378 filename=/dev/nvme2n1
00:25:23.378 [job4]
00:25:23.378 filename=/dev/nvme3n1
00:25:23.378 [job5]
00:25:23.378 filename=/dev/nvme4n1
00:25:23.378 [job6]
00:25:23.378 filename=/dev/nvme5n1
00:25:23.378 [job7]
00:25:23.378 filename=/dev/nvme6n1
00:25:23.378 [job8]
00:25:23.378 filename=/dev/nvme7n1
00:25:23.378 [job9]
00:25:23.378 filename=/dev/nvme8n1
00:25:23.378 [job10]
00:25:23.378 filename=/dev/nvme9n1
00:25:23.635 Could not set queue depth (nvme0n1)
00:25:23.635 Could not set queue depth (nvme10n1)
00:25:23.635 Could not set queue depth (nvme1n1)
00:25:23.635 Could not set queue depth (nvme2n1)
00:25:23.635 Could not set queue depth (nvme3n1)
00:25:23.635 Could not set queue depth (nvme4n1)
00:25:23.635 Could not set queue depth (nvme5n1)
00:25:23.635 Could not set queue depth (nvme6n1)
00:25:23.635 Could not set queue depth (nvme7n1)
00:25:23.635 Could not set queue depth (nvme8n1)
00:25:23.635 Could not set queue depth (nvme9n1)
00:25:23.635 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.635 fio-3.35
00:25:23.635 Starting 11 threads
00:25:35.870
00:25:35.870 job0: (groupid=0, jobs=1): err= 0: pid=3657973: Sun Jul 14 18:57:22 2024
00:25:35.870 read: IOPS=669, BW=167MiB/s (176MB/s)(1690MiB/10093msec)
00:25:35.870 slat (usec): min=9, max=115579, avg=1135.85, stdev=4739.16
00:25:35.870 clat (msec): min=2, max=262, avg=94.37, stdev=53.29
00:25:35.870 lat (msec): min=2, max=262, avg=95.50, stdev=54.20
00:25:35.870 clat percentiles (msec):
00:25:35.870 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 30], 20.00th=[ 43],
00:25:35.870 | 30.00th=[ 60], 40.00th=[ 73], 50.00th=[ 89], 60.00th=[ 101],
00:25:35.870 | 70.00th=[ 121], 80.00th=[ 148], 90.00th=[ 176], 95.00th=[ 192],
00:25:35.870 | 99.00th=[ 209], 99.50th=[ 218], 99.90th=[ 239], 99.95th=[ 249],
00:25:35.870 | 99.99th=[ 262]
00:25:35.870 bw ( KiB/s): min=83968, max=279504, per=8.75%, avg=171370.85, stdev=60096.56, samples=20
00:25:35.870 iops : min= 328, max= 1091, avg=669.35, stdev=234.70, samples=20
00:25:35.870 lat (msec) : 4=0.55%, 10=2.07%, 20=2.63%, 50=18.11%, 100=36.62%
00:25:35.870 lat (msec) : 250=39.98%, 500=0.04%
00:25:35.870 cpu : usr=0.32%, sys=2.00%, ctx=1386, majf=0, minf=4097
00:25:35.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:25:35.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.870 issued rwts: total=6759,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.870 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.870 job1: (groupid=0, jobs=1): err= 0: pid=3657974: Sun Jul 14 18:57:22 2024
00:25:35.870 read: IOPS=736, BW=184MiB/s (193MB/s)(1859MiB/10092msec)
00:25:35.870 slat (usec): min=9, max=41704, avg=1076.57, stdev=3591.69
00:25:35.870 clat (msec): min=4, max=293, avg=85.71, stdev=46.12
00:25:35.870 lat (msec): min=4, max=293, avg=86.79, stdev=46.71
00:25:35.870 clat percentiles (msec):
00:25:35.870 | 1.00th=[ 14], 5.00th=[ 22], 10.00th=[ 31], 20.00th=[ 52],
00:25:35.870 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 78], 60.00th=[ 87],
00:25:35.870 | 70.00th=[ 101], 80.00th=[ 120], 90.00th=[ 159], 95.00th=[ 180],
00:25:35.870 | 99.00th=[ 205], 99.50th=[ 230], 99.90th=[ 284], 99.95th=[ 284],
00:25:35.870 | 99.99th=[ 292]
00:25:35.870 bw ( KiB/s): min=93184, max=264192, per=9.63%, avg=188675.45, stdev=59961.08, samples=20
00:25:35.870 iops : min= 364, max= 1032, avg=736.95, stdev=234.15, samples=20
00:25:35.870 lat (msec) : 10=0.30%, 20=3.71%, 50=14.21%, 100=51.39%, 250=30.02%
00:25:35.870 lat (msec) : 500=0.38%
00:25:35.870 cpu : usr=0.31%, sys=2.23%, ctx=1437, majf=0, minf=4097
00:25:35.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:35.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.870 issued rwts: total=7436,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.870 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.870 job2: (groupid=0, jobs=1): err= 0: pid=3657975: Sun Jul 14 18:57:22 2024
00:25:35.870 read: IOPS=745, BW=186MiB/s (195MB/s)(1868MiB/10025msec)
00:25:35.870 slat (usec): min=11, max=53135, avg=1257.72, stdev=3946.00
00:25:35.870 clat (msec): min=2, max=217, avg=84.53, stdev=43.03
00:25:35.870 lat (msec): min=2, max=235, avg=85.79, stdev=43.73
00:25:35.870 clat percentiles (msec):
00:25:35.870 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 35],
00:25:35.870 | 30.00th=[ 57], 40.00th=[ 73], 50.00th=[ 86], 60.00th=[ 94],
00:25:35.870 | 70.00th=[ 105], 80.00th=[ 117], 90.00th=[ 144], 95.00th=[ 163],
00:25:35.870 | 99.00th=[ 192], 99.50th=[ 203], 99.90th=[ 218], 99.95th=[ 218],
00:25:35.870 | 99.99th=[ 218]
00:25:35.870 bw ( KiB/s): min=89600, max=495104, per=9.68%, avg=189619.30, stdev=100485.26, samples=20
00:25:35.870 iops : min= 350, max= 1934, avg=740.65, stdev=392.49, samples=20
00:25:35.870 lat (msec) : 4=0.12%, 10=0.33%, 20=0.58%, 50=26.03%, 100=38.84%
00:25:35.870 lat (msec) : 250=34.10%
00:25:35.870 cpu : usr=0.41%, sys=2.51%, ctx=1325, majf=0, minf=4097
00:25:35.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:35.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.870 issued rwts: total=7472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.870 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.870 job3: (groupid=0, jobs=1): err= 0: pid=3657976: Sun Jul 14 18:57:22 2024
00:25:35.870 read: IOPS=570, BW=143MiB/s (150MB/s)(1430MiB/10018msec)
00:25:35.870 slat (usec): min=10, max=104360, avg=1229.21, stdev=5412.45
00:25:35.870 clat (usec): min=805, max=295962, avg=110795.78, stdev=60091.02
00:25:35.870 lat (usec): min=870, max=296013, avg=112024.99, stdev=61046.16
00:25:35.870 clat percentiles (msec):
00:25:35.870 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 46],
00:25:35.870 | 30.00th=[ 78], 40.00th=[ 102], 50.00th=[ 120], 60.00th=[ 136],
00:25:35.870 | 70.00th=[ 153], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 199],
00:25:35.870 | 99.00th=[ 218], 99.50th=[ 222], 99.90th=[ 232], 99.95th=[ 296],
00:25:35.870 | 99.99th=[ 296]
00:25:35.870 bw ( KiB/s): min=83456, max=291840, per=7.39%, avg=144737.05, stdev=60422.34, samples=20
00:25:35.870 iops : min= 326, max= 1140, avg=565.30, stdev=236.02, samples=20
00:25:35.870 lat (usec) : 1000=0.05%
00:25:35.870 lat (msec) : 2=0.72%, 4=0.61%, 10=4.28%, 20=4.20%, 50=11.44%
00:25:35.870 lat (msec) : 100=18.12%, 250=60.53%, 500=0.05%
00:25:35.870 cpu : usr=0.31%, sys=1.98%, ctx=1359, majf=0, minf=3721
00:25:35.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:35.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.870 issued rwts: total=5718,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.870 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.870 job4: (groupid=0, jobs=1): err= 0: pid=3657977: Sun Jul 14 18:57:22 2024
00:25:35.870 read: IOPS=601, BW=150MiB/s (158MB/s)(1522MiB/10129msec)
00:25:35.870 slat (usec): min=8, max=132289, avg=1431.25, stdev=5039.19
00:25:35.870 clat (msec): min=2, max=297, avg=104.95, stdev=48.14
00:25:35.870 lat (msec): min=2, max=326, avg=106.38, stdev=48.87
00:25:35.871 clat percentiles (msec):
00:25:35.871 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 71],
00:25:35.871 | 30.00th=[ 86], 40.00th=[ 94], 50.00th=[ 103], 60.00th=[ 111],
00:25:35.871 | 70.00th=[ 124], 80.00th=[ 144], 90.00th=[ 171], 95.00th=[ 190],
00:25:35.871 | 99.00th=[ 224], 99.50th=[ 245], 99.90th=[ 249], 99.95th=[ 251],
00:25:35.871 | 99.99th=[ 296]
00:25:35.871 bw ( KiB/s): min=96768, max=205824, per=7.87%, avg=154204.50, stdev=35020.75, samples=20
00:25:35.871 iops : min= 378, max= 804, avg=602.30, stdev=136.76, samples=20
00:25:35.871 lat (msec) : 4=0.15%, 10=0.97%, 20=3.20%, 50=10.50%, 100=33.33%
00:25:35.871 lat (msec) : 250=51.77%, 500=0.08%
00:25:35.871 cpu : usr=0.43%, sys=1.82%, ctx=1200, majf=0, minf=4097
00:25:35.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:25:35.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.871 issued rwts: total=6088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.871 job5: (groupid=0, jobs=1): err= 0: pid=3657978: Sun Jul 14 18:57:22 2024
00:25:35.871 read: IOPS=920, BW=230MiB/s (241MB/s)(2326MiB/10104msec)
00:25:35.871 slat (usec): min=9, max=143437, avg=967.25, stdev=4195.63
00:25:35.871 clat (usec): min=1651, max=346785, avg=68470.44, stdev=51127.60
00:25:35.871 lat (usec): min=1681, max=346817, avg=69437.69, stdev=51624.84
00:25:35.871 clat percentiles (msec):
00:25:35.871 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 27], 20.00th=[ 34],
00:25:35.871 | 30.00th=[ 44], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 62],
00:25:35.871 | 70.00th=[ 73], 80.00th=[ 90], 90.00th=[ 133], 95.00th=[ 186],
00:25:35.871 | 99.00th=[ 268], 99.50th=[ 284], 99.90th=[ 317], 99.95th=[ 317],
00:25:35.871 | 99.99th=[ 347]
00:25:35.871 bw ( KiB/s): min=88576, max=531456, per=12.07%, avg=236520.30, stdev=101701.70, samples=20
00:25:35.871 iops : min= 346, max= 2076, avg=923.85, stdev=397.26, samples=20
00:25:35.871 lat (msec) : 2=0.01%, 4=0.21%, 10=3.57%, 20=5.04%, 50=32.68%
00:25:35.871 lat (msec) : 100=43.32%, 250=13.59%, 500=1.57%
00:25:35.871 cpu : usr=0.46%, sys=2.87%, ctx=1612, majf=0, minf=4097
00:25:35.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:25:35.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.871 issued rwts: total=9305,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.871 job6: (groupid=0, jobs=1): err= 0: pid=3657979: Sun Jul 14 18:57:22 2024
00:25:35.871 read: IOPS=684, BW=171MiB/s (179MB/s)(1728MiB/10095msec)
00:25:35.871 slat (usec): min=10, max=131676, avg=1209.03, stdev=4895.12
00:25:35.871 clat (msec): min=5, max=257, avg=92.18, stdev=56.31
00:25:35.871 lat (msec): min=5, max=376, avg=93.39, stdev=57.20
00:25:35.871 clat percentiles (msec):
00:25:35.871 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 33],
00:25:35.871 | 30.00th=[ 45], 40.00th=[ 63], 50.00th=[ 80], 60.00th=[ 108],
00:25:35.871 | 70.00th=[ 128], 80.00th=[ 153], 90.00th=[ 176], 95.00th=[ 188],
00:25:35.871 | 99.00th=[ 211], 99.50th=[ 228], 99.90th=[ 255], 99.95th=[ 257],
00:25:35.871 | 99.99th=[ 257]
00:25:35.871 bw ( KiB/s): min=84992, max=516096, per=8.95%, avg=175309.40, stdev=102718.13, samples=20
00:25:35.871 iops : min= 332, max= 2016, avg=684.80, stdev=401.24, samples=20
00:25:35.871 lat (msec) : 10=0.52%, 20=1.56%, 50=31.12%, 100=23.67%, 250=42.88%
00:25:35.871 lat (msec) : 500=0.25%
00:25:35.871 cpu : usr=0.43%, sys=2.14%, ctx=1448, majf=0, minf=4097
00:25:35.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:25:35.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.871 issued rwts: total=6912,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.871 job7: (groupid=0, jobs=1): err= 0: pid=3657980: Sun Jul 14 18:57:22 2024
00:25:35.871 read: IOPS=825, BW=206MiB/s (216MB/s)(2089MiB/10121msec)
00:25:35.871 slat (usec): min=8, max=131520, avg=745.33, stdev=4574.44
00:25:35.871 clat (usec): min=754, max=310127, avg=76702.88, stdev=56297.21
00:25:35.871 lat (usec): min=770, max=310145, avg=77448.21, stdev=56999.28
00:25:35.871 clat percentiles (msec):
00:25:35.871 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 17], 20.00th=[ 28],
00:25:35.871 | 30.00th=[ 34], 40.00th=[ 48], 50.00th=[ 61], 60.00th=[ 80],
00:25:35.871 | 70.00th=[ 104], 80.00th=[ 133], 90.00th=[ 161], 95.00th=[ 186],
00:25:35.871 | 99.00th=[ 218], 99.50th=[ 232], 99.90th=[ 275], 99.95th=[ 275],
00:25:35.871 | 99.99th=[ 309]
00:25:35.871 bw ( KiB/s): min=93184, max=510976, per=10.84%, avg=212293.05, stdev=112755.38, samples=20
00:25:35.871 iops : min= 364, max= 1996, avg=829.20, stdev=440.48, samples=20
00:25:35.871 lat (usec) : 1000=0.17%
00:25:35.871 lat (msec) : 2=0.17%, 4=0.83%, 10=3.46%, 20=9.37%, 50=27.98%
00:25:35.871 lat (msec) : 100=26.52%, 250=31.26%, 500=0.26%
00:25:35.871 cpu : usr=0.40%, sys=1.74%, ctx=1717, majf=0, minf=4097
00:25:35.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:25:35.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.871 issued rwts: total=8357,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.871 job8: (groupid=0, jobs=1): err= 0: pid=3658005: Sun Jul 14 18:57:22 2024
00:25:35.871 read: IOPS=670, BW=168MiB/s (176MB/s)(1692MiB/10088msec)
00:25:35.871 slat (usec): min=9, max=86573, avg=1049.82, stdev=3969.91
00:25:35.871 clat (msec): min=2, max=236, avg=94.31, stdev=49.93
00:25:35.871 lat (msec): min=2, max=244, avg=95.36, stdev=50.61
00:25:35.871 clat percentiles (msec):
00:25:35.871 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 36],
00:25:35.871 | 30.00th=[ 62], 40.00th=[ 83], 50.00th=[ 95], 60.00th=[ 106],
00:25:35.871 | 70.00th=[ 118], 80.00th=[ 140], 90.00th=[ 169], 95.00th=[ 184],
00:25:35.871 | 99.00th=[ 199], 99.50th=[ 207], 99.90th=[ 218], 99.95th=[ 220],
00:25:35.871 | 99.99th=[ 236]
00:25:35.871 bw ( KiB/s): min=85504, max=485888, per=8.76%, avg=171566.20, stdev=87318.96, samples=20
00:25:35.871 iops : min= 334, max= 1898, avg=670.10, stdev=341.12, samples=20
00:25:35.871 lat (msec) : 4=0.01%, 10=0.24%, 20=1.64%, 50=24.02%, 100=28.97%
00:25:35.871 lat (msec) : 250=45.12%
00:25:35.871 cpu : usr=0.47%, sys=2.33%, ctx=1585, majf=0, minf=4097
00:25:35.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:25:35.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.871 issued rwts: total=6766,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.871 job9: (groupid=0, jobs=1): err= 0: pid=3658018: Sun Jul 14 18:57:22 2024
00:25:35.871 read: IOPS=552, BW=138MiB/s (145MB/s)(1393MiB/10092msec)
00:25:35.871 slat (usec): min=9, max=89432, avg=1477.29, stdev=5071.07
00:25:35.871 clat (msec): min=3, max=257, avg=114.37, stdev=43.57
00:25:35.871 lat (msec): min=3, max=283, avg=115.85, stdev=44.36
00:25:35.871 clat percentiles (msec):
00:25:35.871 | 1.00th=[ 31], 5.00th=[ 55], 10.00th=[ 69], 20.00th=[ 82],
00:25:35.871 | 30.00th=[ 89], 40.00th=[ 95], 50.00th=[ 104], 60.00th=[ 113],
00:25:35.871 | 70.00th=[ 130], 80.00th=[ 161], 90.00th=[ 184], 95.00th=[ 197],
00:25:35.871 | 99.00th=[ 213], 99.50th=[ 222], 99.90th=[ 253], 99.95th=[ 257],
00:25:35.871 | 99.99th=[ 257]
00:25:35.871 bw ( KiB/s): min=79360, max=221696, per=7.20%, avg=140997.95, stdev=43496.83, samples=20
00:25:35.871 iops : min= 310, max= 866, avg=550.70, stdev=169.87, samples=20
00:25:35.871 lat (msec) : 4=0.05%, 10=0.16%, 20=0.05%, 50=3.48%, 100=42.74%
00:25:35.871 lat (msec) : 250=53.31%, 500=0.20%
00:25:35.871 cpu : usr=0.38%, sys=1.71%, ctx=1271, majf=0, minf=4097
00:25:35.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:35.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.871 issued rwts: total=5571,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.871 job10: (groupid=0, jobs=1): err= 0: pid=3658034: Sun Jul 14 18:57:22 2024
00:25:35.871 read: IOPS=705, BW=176MiB/s (185MB/s)(1781MiB/10096msec)
00:25:35.871 slat (usec): min=9, max=155739, avg=953.59, stdev=4489.55
00:25:35.871 clat (usec): min=1045, max=269988, avg=89680.84, stdev=60809.06
00:25:35.871 lat (usec): min=1063, max=324408, avg=90634.43, stdev=61489.15
00:25:35.871 clat percentiles (msec):
00:25:35.871 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 24], 20.00th=[ 33],
00:25:35.871 | 30.00th=[ 44], 40.00th=[ 58], 50.00th=[ 72], 60.00th=[ 95],
00:25:35.871 | 70.00th=[ 126], 80.00th=[ 159], 90.00th=[ 180], 95.00th=[ 197],
00:25:35.871 | 99.00th=[ 230], 99.50th=[ 264], 99.90th=[ 271], 99.95th=[ 271],
00:25:35.871 | 99.99th=[ 271]
00:25:35.871 bw ( KiB/s): min=80384, max=433664, per=9.23%, avg=180755.30, stdev=93020.27, samples=20
00:25:35.871 iops : min= 314, max= 1694, avg=706.00, stdev=363.40, samples=20
00:25:35.871 lat (msec) : 2=0.28%, 4=0.98%, 10=2.86%, 20=3.92%, 50=26.57%
00:25:35.871 lat (msec) : 100=26.99%, 250=37.69%, 500=0.70%
00:25:35.871 cpu : usr=0.32%, sys=2.18%, ctx=1591, majf=0, minf=4097
00:25:35.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1%
00:25:35.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:35.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:35.871 issued rwts: total=7124,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:35.871 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:35.871
00:25:35.871 Run status group 0 (all jobs):
00:25:35.871 READ: bw=1913MiB/s (2006MB/s), 138MiB/s-230MiB/s (145MB/s-241MB/s), io=18.9GiB (20.3GB), run=10018-10129msec
00:25:35.871
00:25:35.871 Disk stats (read/write):
00:25:35.871 nvme0n1: ios=13225/0, merge=0/0, ticks=1235149/0, in_queue=1235149, util=97.13%
00:25:35.871 nvme10n1: ios=14644/0, merge=0/0, ticks=1235869/0, in_queue=1235869, util=97.35%
00:25:35.871 nvme1n1: ios=14714/0, merge=0/0, ticks=1237142/0, in_queue=1237142, util=97.63%
00:25:35.871 nvme2n1: ios=11003/0, merge=0/0, ticks=1234780/0, in_queue=1234780, util=97.77%
00:25:35.871 nvme3n1: ios=11980/0, merge=0/0, ticks=1232910/0, in_queue=1232910, util=97.83%
00:25:35.871 nvme4n1: ios=18378/0, merge=0/0, ticks=1226446/0, in_queue=1226446, util=98.16%
00:25:35.871 nvme5n1: ios=13626/0, merge=0/0, ticks=1232814/0, in_queue=1232814, util=98.33%
00:25:35.871 nvme6n1: ios=16518/0, merge=0/0, ticks=1233358/0, in_queue=1233358, util=98.43%
00:25:35.871 nvme7n1: ios=13352/0, merge=0/0, ticks=1240050/0, in_queue=1240050, util=98.86%
00:25:35.871 nvme8n1: ios=10992/0, merge=0/0, ticks=1232165/0, in_queue=1232165, util=99.06%
00:25:35.871 nvme9n1: ios=14093/0, merge=0/0, ticks=1237199/0, in_queue=1237199, util=99.20%
18:57:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:25:35.872 [global]
00:25:35.872 thread=1
00:25:35.872 invalidate=1
00:25:35.872 rw=randwrite
00:25:35.872 time_based=1
00:25:35.872 runtime=10
00:25:35.872 ioengine=libaio
00:25:35.872 direct=1
00:25:35.872 bs=262144
00:25:35.872 iodepth=64
00:25:35.872 norandommap=1
00:25:35.872 numjobs=1
00:25:35.872
00:25:35.872 [job0]
00:25:35.872 filename=/dev/nvme0n1
00:25:35.872 [job1]
00:25:35.872 filename=/dev/nvme10n1
00:25:35.872 [job2]
00:25:35.872 filename=/dev/nvme1n1
00:25:35.872 [job3]
00:25:35.872 filename=/dev/nvme2n1
00:25:35.872 [job4]
00:25:35.872 filename=/dev/nvme3n1
00:25:35.872 [job5]
00:25:35.872 filename=/dev/nvme4n1
00:25:35.872 [job6]
00:25:35.872 filename=/dev/nvme5n1
00:25:35.872 [job7]
00:25:35.872 filename=/dev/nvme6n1
00:25:35.872 [job8]
00:25:35.872 filename=/dev/nvme7n1
00:25:35.872 [job9]
00:25:35.872 filename=/dev/nvme8n1
00:25:35.872 [job10]
00:25:35.872 filename=/dev/nvme9n1
00:25:35.872 Could not set queue depth (nvme0n1)
00:25:35.872 Could not set queue depth (nvme10n1)
00:25:35.872 Could not set queue depth (nvme1n1)
00:25:35.872 Could not set queue depth (nvme2n1)
00:25:35.872 Could not set queue depth (nvme3n1)
00:25:35.872 Could not set queue depth (nvme4n1)
00:25:35.872 Could not set queue depth (nvme5n1)
00:25:35.872 Could not set queue depth (nvme6n1)
00:25:35.872 Could not set queue depth (nvme7n1)
00:25:35.872 Could not set queue depth (nvme8n1)
00:25:35.872 Could not set queue depth (nvme9n1)
00:25:35.872 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:35.872 fio-3.35
00:25:35.872 Starting 11 threads
00:25:45.846
00:25:45.846 job0: (groupid=0, jobs=1): err= 0: pid=3659152: Sun Jul 14 18:57:33 2024
00:25:45.846 write: IOPS=477, BW=119MiB/s (125MB/s)(1219MiB/10208msec); 0 zone resets
00:25:45.846 slat (usec): min=19, max=137432, avg=1796.63, stdev=4800.32
00:25:45.846 clat (usec): min=1675, max=472780, avg=132068.39, stdev=70956.29
00:25:45.846 lat (usec): min=1716, max=472812, avg=133865.02, stdev=71805.94
00:25:45.846 clat percentiles (msec):
00:25:45.846 | 1.00th=[ 20], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 55],
00:25:45.846 | 30.00th=[ 88], 40.00th=[ 114], 50.00th=[ 136], 60.00th=[ 148],
00:25:45.846 | 70.00th=[ 163], 80.00th=[ 186], 90.00th=[ 224], 95.00th=[ 255],
00:25:45.846 | 99.00th=[ 351], 99.50th=[ 397], 99.90th=[ 460], 99.95th=[ 460],
00:25:45.846 | 99.99th=[ 472]
00:25:45.846 bw ( KiB/s): min=71168, max=347136, per=8.78%, avg=123240.10, stdev=65440.97, samples=20
00:25:45.846 iops : min= 278, max= 1356, avg=481.35, stdev=255.64, samples=20
00:25:45.846 lat (msec) : 2=0.02%, 4=0.39%, 10=0.33%, 20=0.27%, 50=17.43%
00:25:45.846 lat (msec) : 100=16.08%, 250=60.02%, 500=5.47%
00:25:45.846 cpu : usr=1.53%, sys=1.38%, ctx=1719, majf=0, minf=1
00:25:45.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:25:45.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:45.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:45.846 issued rwts: total=0,4877,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:45.846 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:45.846 job1: (groupid=0, jobs=1): err= 0: pid=3659164: Sun Jul 14 18:57:33 2024
00:25:45.846 write: IOPS=478, BW=120MiB/s (126MB/s)(1213MiB/10136msec); 0 zone resets
00:25:45.846 slat (usec): min=16, max=60438, avg=1476.43, stdev=4212.86
00:25:45.846 clat (usec): min=790, max=402159, avg=132129.28, stdev=79675.94
00:25:45.846 lat (usec): min=856, max=407130, avg=133605.71, stdev=80675.90
00:25:45.846 clat percentiles (msec):
00:25:45.846 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 50],
00:25:45.846 | 30.00th=[ 93], 40.00th=[ 125], 50.00th=[ 136], 60.00th=[ 148],
00:25:45.846 | 70.00th=[ 163], 80.00th=[ 188], 90.00th=[ 224], 95.00th=[ 268],
00:25:45.846 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 401], 99.95th=[ 401],
00:25:45.846 | 99.99th=[ 401]
00:25:45.846 bw ( KiB/s): min=40960, max=272896, per=8.73%, avg=122613.95, stdev=52189.82, samples=20
00:25:45.846 iops : min= 160, max= 1066, avg=478.95, stdev=203.87, samples=20
00:25:45.846 lat (usec) : 1000=0.16%
00:25:45.846 lat (msec) : 2=0.62%, 4=1.32%, 10=3.32%, 20=4.31%, 50=10.82%
00:25:45.846 lat (msec) : 100=10.53%, 250=62.81%, 500=6.12%
00:25:45.846 cpu : usr=1.58%, sys=1.57%, ctx=2774, majf=0, minf=1
00:25:45.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:25:45.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:45.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:45.846 issued rwts: total=0,4853,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:45.846 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:45.846 job2: (groupid=0, jobs=1): err= 0: pid=3659165: Sun Jul 14 18:57:33 2024
00:25:45.847 write: IOPS=443, BW=111MiB/s (116MB/s)(1125MiB/10131msec); 0 zone resets
00:25:45.847 slat (usec): min=17, max=80177, avg=1881.73, stdev=4474.06
00:25:45.847 clat (usec): min=764, max=393282, avg=142152.64, stdev=68222.31
00:25:45.847 lat (usec): min=790, max=410992, avg=144034.37, stdev=68883.65
00:25:45.847 clat percentiles (usec):
00:25:45.847 | 1.00th=[ 1795], 5.00th=[ 34341], 10.00th=[ 72877], 20.00th=[ 87557],
00:25:45.847 | 30.00th=[110625], 40.00th=[127402], 50.00th=[137364], 60.00th=[147850],
00:25:45.847 | 70.00th=[166724], 80.00th=[187696], 90.00th=[214959], 95.00th=[254804],
00:25:45.847 | 99.00th=[371196], 99.50th=[379585], 99.90th=[392168], 99.95th=[392168],
00:25:45.847 | 99.99th=[392168]
00:25:45.847 bw ( KiB/s): min=65536, max=206435, per=8.09%, avg=113523.35, stdev=37336.92, samples=20
00:25:45.847 iops : min= 256, max= 806, avg=443.40, stdev=145.83, samples=20
00:25:45.847 lat (usec) : 1000=0.16%
00:25:45.847 lat (msec) : 2=1.09%, 4=1.11%, 10=0.84%, 20=1.13%, 50=2.27%
00:25:45.847 lat (msec) : 100=20.10%, 250=68.12%, 500=5.18%
00:25:45.847 cpu : usr=1.36%, sys=1.28%, ctx=1774, majf=0, minf=1
00:25:45.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:25:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:45.847 issued rwts: total=0,4498,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:45.847 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:45.847 job3: (groupid=0, jobs=1): err= 0: pid=3659166: Sun Jul 14 18:57:33 2024
00:25:45.847 write: IOPS=609, BW=152MiB/s (160MB/s)(1556MiB/10214msec); 0 zone resets
00:25:45.847 slat (usec): min=17, max=137847, avg=1212.25, stdev=4315.35
00:25:45.847 clat (usec): min=930, max=439266, avg=103730.88, stdev=64142.78
00:25:45.847 lat (usec): min=967, max=439295, avg=104943.14, stdev=64825.98
00:25:45.847 clat percentiles (msec):
00:25:45.847 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 25], 20.00th=[ 46],
00:25:45.847 | 30.00th=[ 70], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 112],
00:25:45.847 | 70.00th=[ 140], 80.00th=[ 161], 90.00th=[ 190], 95.00th=[ 213],
00:25:45.847 | 99.00th=[ 284], 99.50th=[ 330], 99.90th=[ 414], 99.95th=[ 426],
00:25:45.847 | 99.99th=[ 439]
00:25:45.847 bw ( KiB/s): min=72192, max=314880, per=11.23%, avg=157705.70, stdev=65720.06, samples=20
00:25:45.847 iops : min= 282, max= 1230, avg=616.00, stdev=256.72, samples=20
00:25:45.847 lat (usec) : 1000=0.06%
00:25:45.847 lat (msec) : 2=0.29%, 4=1.11%, 10=2.33%, 20=4.55%, 50=14.88%
00:25:45.847 lat (msec) : 100=34.21%, 250=41.27%, 500=1.30%
00:25:45.847 cpu : usr=1.81%, sys=1.70%, ctx=2993, majf=0, minf=1
00:25:45.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:25:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:45.847 issued rwts: total=0,6223,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:45.847 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:45.847 job4: (groupid=0, jobs=1): err= 0: pid=3659167: Sun Jul 14 18:57:33 2024
00:25:45.847 write: IOPS=524, BW=131MiB/s (137MB/s)(1338MiB/10212msec); 0 zone resets
00:25:45.847 slat (usec): min=20, max=49515, avg=1138.16, stdev=3325.53
00:25:45.847 clat (msec): min=2, max=407, avg=120.87, stdev=66.56
00:25:45.847 lat (msec): min=2, max=407, avg=122.01, stdev=67.24
00:25:45.847 clat percentiles (msec):
00:25:45.847 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 46], 20.00th=[ 73],
00:25:45.847 | 30.00th=[ 82], 40.00th=[ 89], 50.00th=[ 109], 60.00th=[ 128],
00:25:45.847 | 70.00th=[ 144], 80.00th=[ 171], 90.00th=[ 205], 95.00th=[ 249],
00:25:45.847 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 393], 99.95th=[ 401],
00:25:45.847 | 99.99th=[ 409]
00:25:45.847 bw ( KiB/s): min=59904, max=201728, per=9.64%, avg=135395.00, stdev=42699.75, samples=20
00:25:45.847 iops : min= 234, max= 788, avg=528.85, stdev=166.75, samples=20
00:25:45.847 lat (msec) : 4=0.13%, 10=0.90%, 20=1.48%, 50=8.74%, 100=34.32%
00:25:45.847 lat (msec) : 250=49.50%, 500=4.93%
00:25:45.847 cpu : usr=1.51%, sys=1.76%, ctx=3060, majf=0, minf=1
00:25:45.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:25:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:45.847 issued rwts: total=0,5352,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:45.847 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:45.847 job5: (groupid=0, jobs=1): err= 0: pid=3659168: Sun Jul 14 18:57:33 2024
00:25:45.847 write: IOPS=438, BW=110MiB/s (115MB/s)(1111MiB/10130msec); 0 zone resets
00:25:45.847 slat (usec): min=21, max=104991, avg=1703.55, stdev=4530.43
00:25:45.847 clat (msec): min=3, max=406, avg=144.16, stdev=68.76
00:25:45.847 lat (msec): min=4, max=416, avg=145.87, stdev=69.68
00:25:45.847 clat percentiles (msec):
00:25:45.847 | 1.00th=[ 12], 5.00th=[ 41], 10.00th=[ 66], 20.00th=[ 84],
00:25:45.847 | 30.00th=[ 105], 40.00th=[ 129], 50.00th=[ 144], 60.00th=[ 155],
00:25:45.847 | 70.00th=[ 169], 80.00th=[ 186], 90.00th=[ 230], 95.00th=[ 288],
00:25:45.847 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 397], 99.95th=[ 401],
00:25:45.847 | 99.99th=[ 405]
00:25:45.847 bw ( KiB/s): min=57344, max=177152, per=7.98%, avg=112092.15, stdev=33202.65, samples=20
00:25:45.847 iops : min= 224, max= 692, avg=437.85, stdev=129.70, samples=20
00:25:45.847 lat (msec) : 4=0.02%, 10=0.77%, 20=0.81%, 50=4.93%, 100=22.20%
00:25:45.847 lat (msec) : 250=62.97%, 500=8.31%
00:25:45.847 cpu : usr=1.44%, sys=1.57%, ctx=2270, majf=0, minf=1
00:25:45.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:25:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%,
64=0.1%, >=64=0.0% 00:25:45.847 issued rwts: total=0,4442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:45.847 job6: (groupid=0, jobs=1): err= 0: pid=3659169: Sun Jul 14 18:57:33 2024 00:25:45.847 write: IOPS=417, BW=104MiB/s (109MB/s)(1065MiB/10208msec); 0 zone resets 00:25:45.847 slat (usec): min=23, max=51166, avg=1974.45, stdev=4514.66 00:25:45.847 clat (usec): min=1741, max=440549, avg=151290.68, stdev=72070.90 00:25:45.847 lat (usec): min=1778, max=440592, avg=153265.13, stdev=73036.12 00:25:45.847 clat percentiles (msec): 00:25:45.847 | 1.00th=[ 13], 5.00th=[ 47], 10.00th=[ 75], 20.00th=[ 84], 00:25:45.847 | 30.00th=[ 110], 40.00th=[ 128], 50.00th=[ 144], 60.00th=[ 163], 00:25:45.847 | 70.00th=[ 184], 80.00th=[ 209], 90.00th=[ 239], 95.00th=[ 292], 00:25:45.847 | 99.00th=[ 347], 99.50th=[ 376], 99.90th=[ 426], 99.95th=[ 430], 00:25:45.847 | 99.99th=[ 443] 00:25:45.847 bw ( KiB/s): min=53248, max=209408, per=7.65%, avg=107435.85, stdev=41068.31, samples=20 00:25:45.847 iops : min= 208, max= 818, avg=419.60, stdev=160.35, samples=20 00:25:45.847 lat (msec) : 2=0.02%, 4=0.59%, 10=0.19%, 20=0.68%, 50=3.94% 00:25:45.847 lat (msec) : 100=21.64%, 250=64.60%, 500=8.33% 00:25:45.847 cpu : usr=1.21%, sys=1.49%, ctx=1765, majf=0, minf=1 00:25:45.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:45.847 issued rwts: total=0,4260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:45.847 job7: (groupid=0, jobs=1): err= 0: pid=3659170: Sun Jul 14 18:57:33 2024 00:25:45.847 write: IOPS=414, BW=104MiB/s (109MB/s)(1050MiB/10133msec); 0 zone resets 00:25:45.847 slat (usec): min=17, max=58209, avg=1654.56, stdev=4467.16 00:25:45.847 clat (msec): min=2, 
max=352, avg=152.68, stdev=67.59 00:25:45.847 lat (msec): min=2, max=352, avg=154.34, stdev=68.68 00:25:45.847 clat percentiles (msec): 00:25:45.847 | 1.00th=[ 22], 5.00th=[ 48], 10.00th=[ 69], 20.00th=[ 90], 00:25:45.847 | 30.00th=[ 121], 40.00th=[ 140], 50.00th=[ 150], 60.00th=[ 161], 00:25:45.847 | 70.00th=[ 178], 80.00th=[ 207], 90.00th=[ 241], 95.00th=[ 279], 00:25:45.847 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 355], 00:25:45.847 | 99.99th=[ 355] 00:25:45.847 bw ( KiB/s): min=49152, max=161280, per=7.54%, avg=105908.15, stdev=29696.55, samples=20 00:25:45.847 iops : min= 192, max= 630, avg=413.65, stdev=115.98, samples=20 00:25:45.847 lat (msec) : 4=0.07%, 10=0.33%, 20=0.45%, 50=4.43%, 100=17.81% 00:25:45.847 lat (msec) : 250=68.26%, 500=8.64% 00:25:45.847 cpu : usr=1.31%, sys=1.44%, ctx=2357, majf=0, minf=1 00:25:45.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:45.847 issued rwts: total=0,4200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:45.847 job8: (groupid=0, jobs=1): err= 0: pid=3659171: Sun Jul 14 18:57:33 2024 00:25:45.847 write: IOPS=623, BW=156MiB/s (164MB/s)(1591MiB/10203msec); 0 zone resets 00:25:45.847 slat (usec): min=15, max=41523, avg=1263.16, stdev=2972.38 00:25:45.847 clat (usec): min=750, max=436730, avg=101282.81, stdev=64022.65 00:25:45.847 lat (usec): min=786, max=436761, avg=102545.97, stdev=64580.80 00:25:45.847 clat percentiles (msec): 00:25:45.847 | 1.00th=[ 4], 5.00th=[ 20], 10.00th=[ 41], 20.00th=[ 51], 00:25:45.847 | 30.00th=[ 62], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 94], 00:25:45.847 | 70.00th=[ 128], 80.00th=[ 157], 90.00th=[ 184], 95.00th=[ 224], 00:25:45.847 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 426], 99.95th=[ 426], 
00:25:45.847 | 99.99th=[ 439] 00:25:45.847 bw ( KiB/s): min=81920, max=317440, per=11.49%, avg=161295.35, stdev=65672.83, samples=20 00:25:45.847 iops : min= 320, max= 1240, avg=630.05, stdev=256.54, samples=20 00:25:45.847 lat (usec) : 1000=0.14% 00:25:45.847 lat (msec) : 2=0.35%, 4=0.66%, 10=2.06%, 20=1.89%, 50=14.20% 00:25:45.847 lat (msec) : 100=44.01%, 250=34.41%, 500=2.28% 00:25:45.847 cpu : usr=1.75%, sys=1.94%, ctx=2767, majf=0, minf=1 00:25:45.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:45.847 issued rwts: total=0,6364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.847 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:45.847 job9: (groupid=0, jobs=1): err= 0: pid=3659172: Sun Jul 14 18:57:33 2024 00:25:45.847 write: IOPS=546, BW=137MiB/s (143MB/s)(1396MiB/10216msec); 0 zone resets 00:25:45.847 slat (usec): min=22, max=61998, avg=1155.02, stdev=3728.12 00:25:45.847 clat (usec): min=830, max=443636, avg=115827.21, stdev=79639.36 00:25:45.847 lat (usec): min=903, max=443667, avg=116982.22, stdev=80559.39 00:25:45.847 clat percentiles (msec): 00:25:45.847 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 42], 00:25:45.847 | 30.00th=[ 65], 40.00th=[ 79], 50.00th=[ 108], 60.00th=[ 138], 00:25:45.847 | 70.00th=[ 153], 80.00th=[ 180], 90.00th=[ 211], 95.00th=[ 257], 00:25:45.847 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 430], 99.95th=[ 430], 00:25:45.847 | 99.99th=[ 443] 00:25:45.847 bw ( KiB/s): min=55808, max=272384, per=10.06%, avg=141291.95, stdev=52921.85, samples=20 00:25:45.847 iops : min= 218, max= 1064, avg=551.90, stdev=206.74, samples=20 00:25:45.847 lat (usec) : 1000=0.09% 00:25:45.847 lat (msec) : 2=0.52%, 4=1.07%, 10=3.12%, 20=5.02%, 50=14.47% 00:25:45.848 lat (msec) : 100=23.55%, 250=46.80%, 500=5.36% 00:25:45.848 cpu : 
usr=1.69%, sys=2.23%, ctx=3537, majf=0, minf=1 00:25:45.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:45.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:45.848 issued rwts: total=0,5583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:45.848 job10: (groupid=0, jobs=1): err= 0: pid=3659173: Sun Jul 14 18:57:33 2024 00:25:45.848 write: IOPS=525, BW=131MiB/s (138MB/s)(1343MiB/10214msec); 0 zone resets 00:25:45.848 slat (usec): min=19, max=108917, avg=1084.44, stdev=4212.97 00:25:45.848 clat (usec): min=786, max=476613, avg=120571.68, stdev=83849.99 00:25:45.848 lat (usec): min=823, max=476643, avg=121656.13, stdev=84833.78 00:25:45.848 clat percentiles (msec): 00:25:45.848 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 24], 20.00th=[ 50], 00:25:45.848 | 30.00th=[ 74], 40.00th=[ 87], 50.00th=[ 101], 60.00th=[ 127], 00:25:45.848 | 70.00th=[ 155], 80.00th=[ 184], 90.00th=[ 228], 95.00th=[ 266], 00:25:45.848 | 99.00th=[ 405], 99.50th=[ 426], 99.90th=[ 464], 99.95th=[ 464], 00:25:45.848 | 99.99th=[ 477] 00:25:45.848 bw ( KiB/s): min=49664, max=210944, per=9.68%, avg=135823.15, stdev=51578.04, samples=20 00:25:45.848 iops : min= 194, max= 824, avg=530.50, stdev=201.48, samples=20 00:25:45.848 lat (usec) : 1000=0.09% 00:25:45.848 lat (msec) : 2=0.17%, 4=0.61%, 10=2.79%, 20=4.39%, 50=12.03% 00:25:45.848 lat (msec) : 100=29.81%, 250=43.11%, 500=6.98% 00:25:45.848 cpu : usr=1.48%, sys=1.81%, ctx=3714, majf=0, minf=1 00:25:45.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:45.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:45.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:45.848 issued rwts: total=0,5370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:45.848 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:25:45.848 00:25:45.848 Run status group 0 (all jobs): 00:25:45.848 WRITE: bw=1371MiB/s (1438MB/s), 104MiB/s-156MiB/s (109MB/s-164MB/s), io=13.7GiB (14.7GB), run=10130-10216msec 00:25:45.848 00:25:45.848 Disk stats (read/write): 00:25:45.848 nvme0n1: ios=49/9730, merge=0/0, ticks=2633/1221412, in_queue=1224045, util=99.89% 00:25:45.848 nvme10n1: ios=48/9362, merge=0/0, ticks=2603/1218161, in_queue=1220764, util=100.00% 00:25:45.848 nvme1n1: ios=43/8824, merge=0/0, ticks=1682/1212192, in_queue=1213874, util=100.00% 00:25:45.848 nvme2n1: ios=48/12422, merge=0/0, ticks=3360/1203875, in_queue=1207235, util=100.00% 00:25:45.848 nvme3n1: ios=45/10681, merge=0/0, ticks=1902/1246415, in_queue=1248317, util=100.00% 00:25:45.848 nvme4n1: ios=24/8705, merge=0/0, ticks=1370/1216062, in_queue=1217432, util=100.00% 00:25:45.848 nvme5n1: ios=0/8501, merge=0/0, ticks=0/1239048, in_queue=1239048, util=98.37% 00:25:45.848 nvme6n1: ios=0/8226, merge=0/0, ticks=0/1219033, in_queue=1219033, util=98.43% 00:25:45.848 nvme7n1: ios=0/12713, merge=0/0, ticks=0/1240228, in_queue=1240228, util=98.85% 00:25:45.848 nvme8n1: ios=38/11122, merge=0/0, ticks=783/1245738, in_queue=1246521, util=100.00% 00:25:45.848 nvme9n1: ios=0/10715, merge=0/0, ticks=0/1250747, in_queue=1250747, util=99.13% 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:45.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:45.848 18:57:33 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:45.848 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:45.848 18:57:33 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.848 18:57:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:46.106 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.106 
18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.106 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:46.363 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.363 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:46.364 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.364 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.364 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:46.621 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:46.621 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:46.621 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:46.621 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:46.621 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:46.621 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:46.621 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:46.621 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:46.622 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:46.622 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.622 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:46.622 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.622 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.622 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:46.880 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:46.880 18:57:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:46.880 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:46.880 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:46.880 18:57:34 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:46.880 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:46.880 18:57:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:46.880 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:46.880 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:46.880 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.880 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:46.880 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.880 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.880 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:46.880 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:46.880 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:46.880 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:47.139 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.139 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:47.398 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.398 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:47.656 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:47.656 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:47.656 18:57:35 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:25:47.656 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.656 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:47.656 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.656 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:47.656 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.656 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:47.656 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:47.657 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:47.657 rmmod nvme_tcp 00:25:47.657 rmmod nvme_fabrics 00:25:47.657 rmmod nvme_keyring 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3653133 ']' 00:25:47.657 18:57:35 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3653133 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 3653133 ']' 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 3653133 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:47.657 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3653133 00:25:47.915 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:47.915 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:47.915 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3653133' 00:25:47.915 killing process with pid 3653133 00:25:47.915 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 3653133 00:25:47.915 18:57:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 3653133 00:25:48.174 18:57:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:48.174 18:57:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:48.174 18:57:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:48.174 18:57:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.174 18:57:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:48.174 18:57:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.174 18:57:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.174 
18:57:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:50.708 18:57:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:50.708
00:25:50.708 real 1m0.230s
00:25:50.708 user 3m23.233s
00:25:50.708 sys 0m24.356s
00:25:50.708 18:57:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:50.708 18:57:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:50.708 ************************************
00:25:50.708 END TEST nvmf_multiconnection
00:25:50.708 ************************************
00:25:50.708 18:57:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:50.708 18:57:38 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:25:50.708 18:57:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:50.708 18:57:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:50.708 18:57:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:50.708 ************************************
00:25:50.708 START TEST nvmf_initiator_timeout
00:25:50.708 ************************************
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:25:50.708 * Looking for test storage...
00:25:50.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout --
nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 --
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:50.708 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:50.709 18:57:38
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable
00:25:50.709 18:57:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=()
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=()
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=()
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=()
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=()
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:52.612
18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:25:52.612 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352
-- # [[ tcp == rdma ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:25:52.612 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under
0000:0a:00.0: cvl_0_0'
00:25:52.612 Found net devices under 0000:0a:00.0: cvl_0_0
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:25:52.612 Found net devices under 0000:0a:00.1: cvl_0_1
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:52.612 18:57:40
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:52.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:52.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms
00:25:52.612
00:25:52.612 --- 10.0.0.2 ping statistics ---
00:25:52.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:52.612 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:52.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:52.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:25:52.612
00:25:52.612 --- 10.0.0.1 ping statistics ---
00:25:52.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:52.612 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout --
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3662508
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3662508
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 3662508 ']'
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:52.612 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:52.613 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:52.613 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:52.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:52.613 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:52.613 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.613 [2024-07-14 18:57:40.639970] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:25:52.613 [2024-07-14 18:57:40.640062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:52.613 EAL: No free 2048 kB hugepages reported on node 1
00:25:52.613 [2024-07-14 18:57:40.710322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:52.613 [2024-07-14 18:57:40.800664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:52.613 [2024-07-14 18:57:40.800725] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:52.613 [2024-07-14 18:57:40.800750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:52.613 [2024-07-14 18:57:40.800763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:52.613 [2024-07-14 18:57:40.800775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:52.613 [2024-07-14 18:57:40.800868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:52.613 [2024-07-14 18:57:40.800927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:52.613 [2024-07-14 18:57:40.801044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:52.613 [2024-07-14 18:57:40.801047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.872 Malloc0
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout --
common/autotest_common.sh@559 -- # xtrace_disable
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.872 Delay0
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.872 [2024-07-14 18:57:40.990691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:52.872 18:57:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.872 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:52.872 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:25:52.872 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:52.872 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.872 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:52.872 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:52.872 18:57:41
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:52.872 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:52.872 [2024-07-14 18:57:41.019000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:52.872 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:52.872 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:25:53.805 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME
00:25:53.805 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0
00:25:53.805 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:25:53.805 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:25:53.806 18:57:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2
00:25:55.721 18:57:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:25:55.721 18:57:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:25:55.721 18:57:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:25:55.721 18:57:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:25:55.721 18:57:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:25:55.721 18:57:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0
00:25:55.721 18:57:43
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3662930
00:25:55.721 18:57:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:25:55.721 18:57:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3
00:25:55.721 [global]
00:25:55.721 thread=1
00:25:55.721 invalidate=1
00:25:55.721 rw=write
00:25:55.721 time_based=1
00:25:55.721 runtime=60
00:25:55.721 ioengine=libaio
00:25:55.721 direct=1
00:25:55.721 bs=4096
00:25:55.721 iodepth=1
00:25:55.721 norandommap=0
00:25:55.721 numjobs=1
00:25:55.721
00:25:55.721 verify_dump=1
00:25:55.721 verify_backlog=512
00:25:55.721 verify_state_save=0
00:25:55.721 do_verify=1
00:25:55.721 verify=crc32c-intel
00:25:55.721 [job0]
00:25:55.721 filename=/dev/nvme0n1
00:25:55.721 Could not set queue depth (nvme0n1)
00:25:55.721 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:25:55.721 fio-3.35
00:25:55.721 Starting 1 thread
00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:59.034 true
00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:25:59.034 true
00:25:59.034 18:57:46
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.034 true 00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.034 true 00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.034 18:57:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.571 true 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.571 true 
00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.571 true 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:01.571 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.572 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:01.572 true 00:26:01.572 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.572 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:01.572 18:57:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3662930 00:26:57.799 00:26:57.799 job0: (groupid=0, jobs=1): err= 0: pid=3663004: Sun Jul 14 18:58:44 2024 00:26:57.799 read: IOPS=47, BW=191KiB/s (196kB/s)(11.2MiB/60027msec) 00:26:57.799 slat (usec): min=6, max=15642, avg=23.01, stdev=306.01 00:26:57.799 clat (usec): min=224, max=40821k, avg=20631.26, stdev=761862.92 00:26:57.799 lat (usec): min=232, max=40821k, avg=20654.27, stdev=761863.20 00:26:57.799 clat percentiles (usec): 00:26:57.799 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 265], 00:26:57.799 | 20.00th=[ 277], 30.00th=[ 285], 40.00th=[ 297], 00:26:57.799 | 50.00th=[ 306], 60.00th=[ 338], 70.00th=[ 371], 00:26:57.799 | 80.00th=[ 498], 90.00th=[ 41157], 95.00th=[ 41157], 00:26:57.799 | 99.00th=[ 42206], 99.50th=[ 42206], 
99.90th=[ 42206], 00:26:57.799 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:57.799 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60027msec); 0 zone resets 00:26:57.799 slat (nsec): min=6726, max=62944, avg=12833.73, stdev=6625.64 00:26:57.799 clat (usec): min=174, max=457, avg=216.02, stdev=30.63 00:26:57.799 lat (usec): min=182, max=498, avg=228.85, stdev=35.95 00:26:57.799 clat percentiles (usec): 00:26:57.800 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 196], 00:26:57.800 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:26:57.800 | 70.00th=[ 219], 80.00th=[ 233], 90.00th=[ 258], 95.00th=[ 281], 00:26:57.800 | 99.00th=[ 322], 99.50th=[ 363], 99.90th=[ 416], 99.95th=[ 424], 00:26:57.800 | 99.99th=[ 457] 00:26:57.800 bw ( KiB/s): min= 3224, max= 7200, per=100.00%, avg=4915.20, stdev=1481.91, samples=5 00:26:57.800 iops : min= 806, max= 1800, avg=1228.80, stdev=370.48, samples=5 00:26:57.800 lat (usec) : 250=46.61%, 500=43.90%, 750=2.29% 00:26:57.800 lat (msec) : 50=7.18%, >=2000=0.02% 00:26:57.800 cpu : usr=0.10%, sys=0.19%, ctx=5945, majf=0, minf=2 00:26:57.800 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:57.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.800 issued rwts: total=2871,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.800 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:57.800 00:26:57.800 Run status group 0 (all jobs): 00:26:57.800 READ: bw=191KiB/s (196kB/s), 191KiB/s-191KiB/s (196kB/s-196kB/s), io=11.2MiB (11.8MB), run=60027-60027msec 00:26:57.800 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60027-60027msec 00:26:57.800 00:26:57.800 Disk stats (read/write): 00:26:57.800 nvme0n1: ios=2966/3072, merge=0/0, ticks=19191/628, in_queue=19819, util=99.63% 00:26:57.800 18:58:44 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:57.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:57.800 nvmf hotplug test: fio successful as expected 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:57.800 rmmod nvme_tcp 00:26:57.800 rmmod nvme_fabrics 00:26:57.800 rmmod nvme_keyring 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3662508 ']' 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3662508 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 3662508 ']' 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 3662508 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3662508 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3662508' 00:26:57.800 killing process with pid 3662508 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 3662508 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 3662508 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.800 18:58:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.369 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.369 00:26:58.369 real 1m8.107s 00:26:58.369 user 4m11.172s 00:26:58.369 sys 0m6.240s 00:26:58.369 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:58.369 18:58:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.369 ************************************ 00:26:58.369 END TEST nvmf_initiator_timeout 00:26:58.369 ************************************ 00:26:58.627 18:58:46 nvmf_tcp -- common/autotest_common.sh@1142 -- 
# return 0 00:26:58.627 18:58:46 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:58.627 18:58:46 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:58.627 18:58:46 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:58.627 18:58:46 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:58.627 18:58:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:00.531 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:00.531 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:00.531 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:00.531 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:00.531 18:58:48 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:00.531 18:58:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:00.531 18:58:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:00.531 18:58:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:00.531 ************************************ 00:27:00.531 START TEST nvmf_perf_adq 00:27:00.531 ************************************ 00:27:00.531 18:58:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:00.531 * Looking for test storage... 
00:27:00.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.531 18:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.532 18:58:48 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:00.532 18:58:48 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:00.532 18:58:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:02.435 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.435 18:58:50 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:02.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.435 
18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:02.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:02.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:02.435 18:58:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:03.004 18:58:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:05.538 18:58:53 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:10.837 18:58:58 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:10.837 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:10.837 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:10.837 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:10.838 18:58:58 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:10.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:10.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:10.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:27:10.838 00:27:10.838 --- 10.0.0.2 ping statistics --- 00:27:10.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.838 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:27:10.838 00:27:10.838 --- 10.0.0.1 ping statistics --- 00:27:10.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.838 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3674505 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3674505 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 
-- # '[' -z 3674505 ']' 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.838 [2024-07-14 18:58:58.366768] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:10.838 [2024-07-14 18:58:58.366842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.838 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.838 [2024-07-14 18:58:58.432280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.838 [2024-07-14 18:58:58.521171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.838 [2024-07-14 18:58:58.521231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.838 [2024-07-14 18:58:58.521244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.838 [2024-07-14 18:58:58.521255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.838 [2024-07-14 18:58:58.521265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:10.838 [2024-07-14 18:58:58.521349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.838 [2024-07-14 18:58:58.521413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.838 [2024-07-14 18:58:58.521479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:10.838 [2024-07-14 18:58:58.521485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.838 [2024-07-14 18:58:58.759831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.838 Malloc1 00:27:10.838 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.839 
18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.839 [2024-07-14 18:58:58.813243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3674536 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:10.839 18:58:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:10.839 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.790 18:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:12.790 18:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.790 18:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.790 18:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.790 18:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:12.790 
"tick_rate": 2700000000, 00:27:12.790 "poll_groups": [ 00:27:12.790 { 00:27:12.790 "name": "nvmf_tgt_poll_group_000", 00:27:12.790 "admin_qpairs": 1, 00:27:12.790 "io_qpairs": 1, 00:27:12.790 "current_admin_qpairs": 1, 00:27:12.790 "current_io_qpairs": 1, 00:27:12.790 "pending_bdev_io": 0, 00:27:12.790 "completed_nvme_io": 18803, 00:27:12.790 "transports": [ 00:27:12.790 { 00:27:12.790 "trtype": "TCP" 00:27:12.790 } 00:27:12.790 ] 00:27:12.790 }, 00:27:12.790 { 00:27:12.790 "name": "nvmf_tgt_poll_group_001", 00:27:12.790 "admin_qpairs": 0, 00:27:12.790 "io_qpairs": 1, 00:27:12.790 "current_admin_qpairs": 0, 00:27:12.790 "current_io_qpairs": 1, 00:27:12.790 "pending_bdev_io": 0, 00:27:12.790 "completed_nvme_io": 19205, 00:27:12.790 "transports": [ 00:27:12.790 { 00:27:12.790 "trtype": "TCP" 00:27:12.790 } 00:27:12.790 ] 00:27:12.790 }, 00:27:12.790 { 00:27:12.790 "name": "nvmf_tgt_poll_group_002", 00:27:12.790 "admin_qpairs": 0, 00:27:12.790 "io_qpairs": 1, 00:27:12.790 "current_admin_qpairs": 0, 00:27:12.790 "current_io_qpairs": 1, 00:27:12.790 "pending_bdev_io": 0, 00:27:12.790 "completed_nvme_io": 19348, 00:27:12.790 "transports": [ 00:27:12.790 { 00:27:12.790 "trtype": "TCP" 00:27:12.790 } 00:27:12.790 ] 00:27:12.790 }, 00:27:12.790 { 00:27:12.790 "name": "nvmf_tgt_poll_group_003", 00:27:12.790 "admin_qpairs": 0, 00:27:12.790 "io_qpairs": 1, 00:27:12.790 "current_admin_qpairs": 0, 00:27:12.790 "current_io_qpairs": 1, 00:27:12.790 "pending_bdev_io": 0, 00:27:12.790 "completed_nvme_io": 19090, 00:27:12.790 "transports": [ 00:27:12.790 { 00:27:12.790 "trtype": "TCP" 00:27:12.790 } 00:27:12.790 ] 00:27:12.790 } 00:27:12.790 ] 00:27:12.790 }' 00:27:12.790 18:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:12.790 18:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:12.790 18:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:12.790 18:59:00 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:12.790 18:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3674536 00:27:20.903 Initializing NVMe Controllers 00:27:20.903 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:20.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:20.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:20.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:20.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:20.903 Initialization complete. Launching workers. 00:27:20.903 ======================================================== 00:27:20.903 Latency(us) 00:27:20.903 Device Information : IOPS MiB/s Average min max 00:27:20.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10774.70 42.09 5940.69 3377.15 8725.87 00:27:20.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10824.60 42.28 5913.64 2631.42 9744.53 00:27:20.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10906.80 42.60 5869.26 2298.88 9663.40 00:27:20.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10534.10 41.15 6076.77 2831.52 9064.86 00:27:20.903 ======================================================== 00:27:20.903 Total : 43040.18 168.13 5949.09 2298.88 9744.53 00:27:20.903 00:27:20.903 18:59:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:20.903 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:20.903 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:20.903 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:20.903 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:20.903 18:59:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:20.903 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:20.903 rmmod nvme_tcp 00:27:20.903 rmmod nvme_fabrics 00:27:20.903 rmmod nvme_keyring 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3674505 ']' 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3674505 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3674505 ']' 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3674505 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3674505 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3674505' 00:27:21.162 killing process with pid 3674505 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3674505 00:27:21.162 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3674505 00:27:21.422 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:21.422 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:21.422 18:59:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:21.422 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:21.422 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:21.422 18:59:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.422 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.422 18:59:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.327 18:59:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:23.327 18:59:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:23.327 18:59:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:23.895 18:59:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:25.799 18:59:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:31.066 18:59:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:31.066 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.066 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.066 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.066 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.067 18:59:18 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:31.067 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:31.067 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:31.067 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:31.067 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:31.067 18:59:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:31.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:31.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:27:31.067 00:27:31.067 --- 10.0.0.2 ping statistics --- 00:27:31.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.067 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:31.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:27:31.067 00:27:31.067 --- 10.0.0.1 ping statistics --- 00:27:31.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.067 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:31.067 net.core.busy_poll = 1 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:31.067 net.core.busy_read = 1 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:31.067 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3677164 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3677164 00:27:31.068 18:59:19 
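The `adq_configure_driver` trace above enables hardware TC offload and busy polling, then splits the NIC queues into two traffic classes and hardware-steers NVMe/TCP traffic (destination port 4420) into the ADQ class. A standalone sketch of those steps follows; it must run as root on an ADQ-capable NIC (e.g. Intel E810/ice), and the device name `eth0`, the target IP 10.0.0.2, and the 2@0/2@2 queue split are assumptions taken from this run's log, not a general recipe.

```shell
#!/usr/bin/env bash
# Sketch of the ADQ setup traced above (perf_adq.sh@22-35).
# Assumptions: device "eth0", target IP 10.0.0.2, 2+2 queue split.
set -e
dev=eth0

# Enable hardware TC offload on the NIC and kernel busy polling.
ethtool --offload "$dev" hw-tc-offload on
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes in channel mode: TC0 = queues 0-1, TC1 = queues 2-3.
tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# Steer NVMe/TCP (dst port 4420) into TC1 entirely in hardware (skip_sw).
tc qdisc add dev "$dev" ingress
tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

In the log these commands run inside the `cvl_0_0_ns_spdk` namespace via `ip netns exec`, since the target-side port was moved there during `nvmf_tcp_init`.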
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3677164 ']' 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:31.068 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.326 [2024-07-14 18:59:19.335250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:31.326 [2024-07-14 18:59:19.335334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.326 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.326 [2024-07-14 18:59:19.405591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.326 [2024-07-14 18:59:19.494436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.326 [2024-07-14 18:59:19.494492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.326 [2024-07-14 18:59:19.494505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.326 [2024-07-14 18:59:19.494515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:31.326 [2024-07-14 18:59:19.494524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.326 [2024-07-14 18:59:19.494608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.326 [2024-07-14 18:59:19.494673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.326 [2024-07-14 18:59:19.494741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.326 [2024-07-14 18:59:19.494743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.326 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.326 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:31.326 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.326 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:31.326 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.585 [2024-07-14 18:59:19.727830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.585 Malloc1 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:31.585 [2024-07-14 18:59:19.781071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3677198 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:31.585 18:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:31.843 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq -- 
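The `rpc_cmd` calls traced above configure the freshly started `nvmf_tgt` over its UNIX socket: socket options for ADQ placement, transport creation, a malloc bdev, and a subsystem with a TCP listener. The same sequence can be replayed against a running target with SPDK's `rpc.py` client; a sketch, where the `scripts/rpc.py` path is an assumption and the arguments mirror this run:

```shell
# Replay of the target configuration traced above (perf_adq.sh@42-49),
# assuming a nvmf_tgt started with --wait-for-rpc on /var/tmp/spdk.sock.
RPC="scripts/rpc.py"

# Placement IDs and zero-copy send feed ADQ's per-queue socket grouping.
$RPC sock_impl_set_options --enable-placement-id 1 \
     --enable-zerocopy-send-server -i posix
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

# One 64 MiB malloc namespace exported over NVMe/TCP on port 4420.
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Note `--sock-priority 1` matches the `hw_tc 1` class created by the tc filter, which is what ties accepted connections to the ADQ queue set.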
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:33.745 "tick_rate": 2700000000, 00:27:33.745 "poll_groups": [ 00:27:33.745 { 00:27:33.745 "name": "nvmf_tgt_poll_group_000", 00:27:33.745 "admin_qpairs": 1, 00:27:33.745 "io_qpairs": 2, 00:27:33.745 "current_admin_qpairs": 1, 00:27:33.745 "current_io_qpairs": 2, 00:27:33.745 "pending_bdev_io": 0, 00:27:33.745 "completed_nvme_io": 26999, 00:27:33.745 "transports": [ 00:27:33.745 { 00:27:33.745 "trtype": "TCP" 00:27:33.745 } 00:27:33.745 ] 00:27:33.745 }, 00:27:33.745 { 00:27:33.745 "name": "nvmf_tgt_poll_group_001", 00:27:33.745 "admin_qpairs": 0, 00:27:33.745 "io_qpairs": 2, 00:27:33.745 "current_admin_qpairs": 0, 00:27:33.745 "current_io_qpairs": 2, 00:27:33.745 "pending_bdev_io": 0, 00:27:33.745 "completed_nvme_io": 26057, 00:27:33.745 "transports": [ 00:27:33.745 { 00:27:33.745 "trtype": "TCP" 00:27:33.745 } 00:27:33.745 ] 00:27:33.745 }, 00:27:33.745 { 00:27:33.745 "name": "nvmf_tgt_poll_group_002", 00:27:33.745 "admin_qpairs": 0, 00:27:33.745 "io_qpairs": 0, 00:27:33.745 "current_admin_qpairs": 0, 00:27:33.745 "current_io_qpairs": 0, 00:27:33.745 "pending_bdev_io": 0, 00:27:33.745 "completed_nvme_io": 0, 00:27:33.745 "transports": [ 00:27:33.745 { 00:27:33.745 "trtype": "TCP" 00:27:33.745 } 00:27:33.745 ] 00:27:33.745 }, 00:27:33.745 { 00:27:33.745 "name": "nvmf_tgt_poll_group_003", 00:27:33.745 "admin_qpairs": 0, 00:27:33.745 "io_qpairs": 0, 00:27:33.745 "current_admin_qpairs": 0, 00:27:33.745 "current_io_qpairs": 0, 00:27:33.745 "pending_bdev_io": 0, 00:27:33.745 "completed_nvme_io": 0, 00:27:33.745 "transports": [ 00:27:33.745 { 00:27:33.745 "trtype": "TCP" 00:27:33.745 } 00:27:33.745 ] 00:27:33.745 } 00:27:33.745 ] 00:27:33.745 }' 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:33.745 18:59:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3677198 00:27:41.891 Initializing NVMe Controllers 00:27:41.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:41.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:41.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:41.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:41.891 Initialization complete. Launching workers. 00:27:41.891 ======================================================== 00:27:41.891 Latency(us) 00:27:41.891 Device Information : IOPS MiB/s Average min max 00:27:41.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6768.80 26.44 9491.72 1704.45 54330.14 00:27:41.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7193.70 28.10 8897.03 1706.80 53564.80 00:27:41.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7123.90 27.83 8984.08 1711.58 52733.24 00:27:41.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6974.30 27.24 9177.92 1814.87 56223.43 00:27:41.891 ======================================================== 00:27:41.891 Total : 28060.69 109.61 9132.39 1704.45 56223.43 00:27:41.891 00:27:41.891 18:59:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:41.891 18:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.891 18:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:41.891 18:59:29 nvmf_tcp.nvmf_perf_adq -- 
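The `nvmf_get_stats` check above (perf_adq.sh@100) passes because exactly two of the four poll groups report `current_io_qpairs == 0`: with ADQ steering in effect, all I/O queue pairs landed on two cores and the other two reactors stayed idle. A minimal sketch of that jq pipeline, run against a snapshot trimmed from this run's stats (assumes `jq` is installed):

```shell
# Count idle poll groups the way perf_adq.sh@100 does.
# The JSON is trimmed from this run's nvmf_get_stats output.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":0}]}'

# jq prints one line per idle poll group; wc -l counts them.
count=$(echo "$stats" \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
  | wc -l)
echo "$count"    # 2 -> two reactors idle, ADQ pinned I/O to two cores
```

The script then asserts `[[ $count -lt $expected ]]` fails, i.e. at least two groups must be idle for the ADQ placement to count as working.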
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.891 18:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:41.891 18:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.891 18:59:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.891 rmmod nvme_tcp 00:27:41.891 rmmod nvme_fabrics 00:27:41.891 rmmod nvme_keyring 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3677164 ']' 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3677164 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3677164 ']' 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3677164 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3677164 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3677164' 00:27:41.891 killing process with pid 3677164 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3677164 00:27:41.891 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3677164 00:27:42.149 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # 
'[' '' == iso ']' 00:27:42.149 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.149 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:42.149 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.149 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.149 18:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.149 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.149 18:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.685 18:59:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.685 18:59:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:44.685 00:27:44.685 real 0m43.839s 00:27:44.685 user 2m39.430s 00:27:44.685 sys 0m9.744s 00:27:44.685 18:59:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.685 18:59:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.685 ************************************ 00:27:44.685 END TEST nvmf_perf_adq 00:27:44.685 ************************************ 00:27:44.685 18:59:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:44.685 18:59:32 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:44.685 18:59:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:44.685 18:59:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.685 18:59:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.685 ************************************ 00:27:44.685 START TEST nvmf_shutdown 00:27:44.685 ************************************ 00:27:44.685 
18:59:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:44.685 * Looking for test storage... 00:27:44.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.685 
18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.685 18:59:32 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:44.685 ************************************ 00:27:44.685 START TEST nvmf_shutdown_tc1 00:27:44.685 ************************************ 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.685 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.686 18:59:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.686 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.686 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.686 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.686 18:59:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:46.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:46.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:46.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.586 18:59:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:46.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.586 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.587 
18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:46.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:27:46.587 00:27:46.587 --- 10.0.0.2 ping statistics --- 00:27:46.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.587 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:27:46.587 00:27:46.587 --- 10.0.0.1 ping statistics --- 00:27:46.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.587 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3680365 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3680365 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3680365 ']' 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.587 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:46.587 [2024-07-14 18:59:34.686562] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:27:46.587 [2024-07-14 18:59:34.686631] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.587 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.587 [2024-07-14 18:59:34.755202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.846 [2024-07-14 18:59:34.847536] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.846 [2024-07-14 18:59:34.847600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.846 [2024-07-14 18:59:34.847617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.846 [2024-07-14 18:59:34.847630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.846 [2024-07-14 18:59:34.847641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:46.846 [2024-07-14 18:59:34.847744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.846 [2024-07-14 18:59:34.847774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.846 [2024-07-14 18:59:34.847839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:46.846 [2024-07-14 18:59:34.847841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.846 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:46.846 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:46.846 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:46.846 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:46.846 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:46.846 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.846 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.846 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.846 18:59:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:46.846 [2024-07-14 18:59:34.998831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:46.846 
18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.846 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:46.846 Malloc1 00:27:47.104 [2024-07-14 18:59:35.088716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.104 Malloc2 00:27:47.104 Malloc3 00:27:47.104 Malloc4 00:27:47.104 Malloc5 00:27:47.104 Malloc6 00:27:47.363 Malloc7 00:27:47.363 Malloc8 00:27:47.363 Malloc9 00:27:47.363 Malloc10 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3680534 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3680534 
/var/tmp/bdevperf.sock 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3680534 ']' 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:47.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.363 { 00:27:47.363 "params": { 00:27:47.363 "name": "Nvme$subsystem", 00:27:47.363 "trtype": "$TEST_TRANSPORT", 00:27:47.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.363 "adrfam": "ipv4", 00:27:47.363 "trsvcid": "$NVMF_PORT", 00:27:47.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.363 "hdgst": ${hdgst:-false}, 00:27:47.363 "ddgst": ${ddgst:-false} 00:27:47.363 }, 00:27:47.363 "method": "bdev_nvme_attach_controller" 00:27:47.363 } 00:27:47.363 EOF 00:27:47.363 )") 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.363 { 00:27:47.363 "params": { 00:27:47.363 "name": "Nvme$subsystem", 00:27:47.363 "trtype": "$TEST_TRANSPORT", 00:27:47.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.363 "adrfam": "ipv4", 00:27:47.363 "trsvcid": "$NVMF_PORT", 00:27:47.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.363 "hdgst": ${hdgst:-false}, 00:27:47.363 "ddgst": ${ddgst:-false} 00:27:47.363 
}, 00:27:47.363 "method": "bdev_nvme_attach_controller" 00:27:47.363 } 00:27:47.363 EOF 00:27:47.363 )") 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.363 { 00:27:47.363 "params": { 00:27:47.363 "name": "Nvme$subsystem", 00:27:47.363 "trtype": "$TEST_TRANSPORT", 00:27:47.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.363 "adrfam": "ipv4", 00:27:47.363 "trsvcid": "$NVMF_PORT", 00:27:47.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.363 "hdgst": ${hdgst:-false}, 00:27:47.363 "ddgst": ${ddgst:-false} 00:27:47.363 }, 00:27:47.363 "method": "bdev_nvme_attach_controller" 00:27:47.363 } 00:27:47.363 EOF 00:27:47.363 )") 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.363 { 00:27:47.363 "params": { 00:27:47.363 "name": "Nvme$subsystem", 00:27:47.363 "trtype": "$TEST_TRANSPORT", 00:27:47.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.363 "adrfam": "ipv4", 00:27:47.363 "trsvcid": "$NVMF_PORT", 00:27:47.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.363 "hdgst": ${hdgst:-false}, 00:27:47.363 "ddgst": ${ddgst:-false} 00:27:47.363 }, 00:27:47.363 "method": "bdev_nvme_attach_controller" 00:27:47.363 } 00:27:47.363 EOF 00:27:47.363 )") 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.363 18:59:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.363 { 00:27:47.363 "params": { 00:27:47.363 "name": "Nvme$subsystem", 00:27:47.363 "trtype": "$TEST_TRANSPORT", 00:27:47.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.363 "adrfam": "ipv4", 00:27:47.363 "trsvcid": "$NVMF_PORT", 00:27:47.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.363 "hdgst": ${hdgst:-false}, 00:27:47.363 "ddgst": ${ddgst:-false} 00:27:47.363 }, 00:27:47.363 "method": "bdev_nvme_attach_controller" 00:27:47.363 } 00:27:47.363 EOF 00:27:47.363 )") 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.363 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.363 { 00:27:47.363 "params": { 00:27:47.363 "name": "Nvme$subsystem", 00:27:47.363 "trtype": "$TEST_TRANSPORT", 00:27:47.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.363 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "$NVMF_PORT", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.364 "hdgst": ${hdgst:-false}, 00:27:47.364 "ddgst": ${ddgst:-false} 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 } 00:27:47.364 EOF 00:27:47.364 )") 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.364 { 00:27:47.364 
"params": { 00:27:47.364 "name": "Nvme$subsystem", 00:27:47.364 "trtype": "$TEST_TRANSPORT", 00:27:47.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "$NVMF_PORT", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.364 "hdgst": ${hdgst:-false}, 00:27:47.364 "ddgst": ${ddgst:-false} 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 } 00:27:47.364 EOF 00:27:47.364 )") 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.364 { 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme$subsystem", 00:27:47.364 "trtype": "$TEST_TRANSPORT", 00:27:47.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "$NVMF_PORT", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.364 "hdgst": ${hdgst:-false}, 00:27:47.364 "ddgst": ${ddgst:-false} 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 } 00:27:47.364 EOF 00:27:47.364 )") 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.364 { 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme$subsystem", 00:27:47.364 "trtype": "$TEST_TRANSPORT", 00:27:47.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "$NVMF_PORT", 00:27:47.364 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.364 "hdgst": ${hdgst:-false}, 00:27:47.364 "ddgst": ${ddgst:-false} 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 } 00:27:47.364 EOF 00:27:47.364 )") 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:47.364 { 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme$subsystem", 00:27:47.364 "trtype": "$TEST_TRANSPORT", 00:27:47.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "$NVMF_PORT", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.364 "hdgst": ${hdgst:-false}, 00:27:47.364 "ddgst": ${ddgst:-false} 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 } 00:27:47.364 EOF 00:27:47.364 )") 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:47.364 18:59:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme1", 00:27:47.364 "trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 },{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme2", 00:27:47.364 "trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 },{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme3", 00:27:47.364 "trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 },{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme4", 00:27:47.364 "trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 },{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme5", 00:27:47.364 
"trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 },{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme6", 00:27:47.364 "trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 },{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme7", 00:27:47.364 "trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 },{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme8", 00:27:47.364 "trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 },{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme9", 00:27:47.364 "trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": 
false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 },{ 00:27:47.364 "params": { 00:27:47.364 "name": "Nvme10", 00:27:47.364 "trtype": "tcp", 00:27:47.364 "traddr": "10.0.0.2", 00:27:47.364 "adrfam": "ipv4", 00:27:47.364 "trsvcid": "4420", 00:27:47.364 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:47.364 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:47.364 "hdgst": false, 00:27:47.364 "ddgst": false 00:27:47.364 }, 00:27:47.364 "method": "bdev_nvme_attach_controller" 00:27:47.364 }' 00:27:47.622 [2024-07-14 18:59:35.593946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:47.622 [2024-07-14 18:59:35.594026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:47.622 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.622 [2024-07-14 18:59:35.659647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.622 [2024-07-14 18:59:35.746371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.520 18:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:49.520 18:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:49.520 18:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:49.520 18:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.520 18:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:49.520 18:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.520 18:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # 
kill -9 3680534 00:27:49.520 18:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:49.520 18:59:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:50.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3680534 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:50.453 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3680365 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 "hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 "hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 "hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 "hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 "hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 "hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 "hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 
"hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 "hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.454 { 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme$subsystem", 00:27:50.454 "trtype": "$TEST_TRANSPORT", 00:27:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "$NVMF_PORT", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.454 "hdgst": ${hdgst:-false}, 00:27:50.454 "ddgst": ${ddgst:-false} 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 } 00:27:50.454 EOF 00:27:50.454 )") 00:27:50.454 18:59:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:50.454 18:59:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme1", 00:27:50.454 "trtype": "tcp", 00:27:50.454 "traddr": "10.0.0.2", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "4420", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:50.454 "hdgst": false, 00:27:50.454 "ddgst": false 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 },{ 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme2", 00:27:50.454 "trtype": "tcp", 00:27:50.454 "traddr": "10.0.0.2", 00:27:50.454 "adrfam": "ipv4", 00:27:50.454 "trsvcid": "4420", 00:27:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:50.454 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:50.454 "hdgst": false, 00:27:50.454 "ddgst": false 00:27:50.454 }, 00:27:50.454 "method": "bdev_nvme_attach_controller" 00:27:50.454 },{ 00:27:50.454 "params": { 00:27:50.454 "name": "Nvme3", 00:27:50.454 "trtype": "tcp", 00:27:50.455 "traddr": "10.0.0.2", 00:27:50.455 "adrfam": "ipv4", 00:27:50.455 "trsvcid": "4420", 00:27:50.455 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:50.455 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:50.455 "hdgst": false, 00:27:50.455 "ddgst": false 00:27:50.455 }, 00:27:50.455 "method": "bdev_nvme_attach_controller" 00:27:50.455 },{ 00:27:50.455 "params": { 00:27:50.455 "name": "Nvme4", 00:27:50.455 "trtype": "tcp", 00:27:50.455 "traddr": "10.0.0.2", 00:27:50.455 "adrfam": "ipv4", 00:27:50.455 "trsvcid": "4420", 00:27:50.455 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:50.455 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:50.455 "hdgst": false, 00:27:50.455 
"ddgst": false 00:27:50.455 }, 00:27:50.455 "method": "bdev_nvme_attach_controller" 00:27:50.455 },{ 00:27:50.455 "params": { 00:27:50.455 "name": "Nvme5", 00:27:50.455 "trtype": "tcp", 00:27:50.455 "traddr": "10.0.0.2", 00:27:50.455 "adrfam": "ipv4", 00:27:50.455 "trsvcid": "4420", 00:27:50.455 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:50.455 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:50.455 "hdgst": false, 00:27:50.455 "ddgst": false 00:27:50.455 }, 00:27:50.455 "method": "bdev_nvme_attach_controller" 00:27:50.455 },{ 00:27:50.455 "params": { 00:27:50.455 "name": "Nvme6", 00:27:50.455 "trtype": "tcp", 00:27:50.455 "traddr": "10.0.0.2", 00:27:50.455 "adrfam": "ipv4", 00:27:50.455 "trsvcid": "4420", 00:27:50.455 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:50.455 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:50.455 "hdgst": false, 00:27:50.455 "ddgst": false 00:27:50.455 }, 00:27:50.455 "method": "bdev_nvme_attach_controller" 00:27:50.455 },{ 00:27:50.455 "params": { 00:27:50.455 "name": "Nvme7", 00:27:50.455 "trtype": "tcp", 00:27:50.455 "traddr": "10.0.0.2", 00:27:50.455 "adrfam": "ipv4", 00:27:50.455 "trsvcid": "4420", 00:27:50.455 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:50.455 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:50.455 "hdgst": false, 00:27:50.455 "ddgst": false 00:27:50.455 }, 00:27:50.455 "method": "bdev_nvme_attach_controller" 00:27:50.455 },{ 00:27:50.455 "params": { 00:27:50.455 "name": "Nvme8", 00:27:50.455 "trtype": "tcp", 00:27:50.455 "traddr": "10.0.0.2", 00:27:50.455 "adrfam": "ipv4", 00:27:50.455 "trsvcid": "4420", 00:27:50.455 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:50.455 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:50.455 "hdgst": false, 00:27:50.455 "ddgst": false 00:27:50.455 }, 00:27:50.455 "method": "bdev_nvme_attach_controller" 00:27:50.455 },{ 00:27:50.455 "params": { 00:27:50.455 "name": "Nvme9", 00:27:50.455 "trtype": "tcp", 00:27:50.455 "traddr": "10.0.0.2", 00:27:50.455 "adrfam": "ipv4", 00:27:50.455 
"trsvcid": "4420", 00:27:50.455 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:50.455 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:50.455 "hdgst": false, 00:27:50.455 "ddgst": false 00:27:50.455 }, 00:27:50.455 "method": "bdev_nvme_attach_controller" 00:27:50.455 },{ 00:27:50.455 "params": { 00:27:50.455 "name": "Nvme10", 00:27:50.455 "trtype": "tcp", 00:27:50.455 "traddr": "10.0.0.2", 00:27:50.455 "adrfam": "ipv4", 00:27:50.455 "trsvcid": "4420", 00:27:50.455 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:50.455 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:50.455 "hdgst": false, 00:27:50.455 "ddgst": false 00:27:50.455 }, 00:27:50.455 "method": "bdev_nvme_attach_controller" 00:27:50.455 }' 00:27:50.455 [2024-07-14 18:59:38.629339] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:50.455 [2024-07-14 18:59:38.629430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3680949 ] 00:27:50.455 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.713 [2024-07-14 18:59:38.695393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.713 [2024-07-14 18:59:38.785735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.609 Running I/O for 1 seconds... 
00:27:53.542 00:27:53.542 Latency(us) 00:27:53.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.542 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme1n1 : 1.17 219.51 13.72 0.00 0.00 288844.04 19223.89 265639.25 00:27:53.542 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme2n1 : 1.06 181.79 11.36 0.00 0.00 342310.31 25243.50 278066.82 00:27:53.542 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme3n1 : 1.06 241.58 15.10 0.00 0.00 252188.44 18252.99 257872.02 00:27:53.542 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme4n1 : 1.12 229.37 14.34 0.00 0.00 262353.16 19903.53 262532.36 00:27:53.542 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme5n1 : 1.18 271.84 16.99 0.00 0.00 218375.66 17961.72 237677.23 00:27:53.542 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme6n1 : 1.18 216.69 13.54 0.00 0.00 269551.88 21456.97 306028.85 00:27:53.542 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme7n1 : 1.16 224.39 14.02 0.00 0.00 254616.32 2924.85 260978.92 00:27:53.542 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme8n1 : 1.18 270.05 16.88 0.00 0.00 208679.90 6650.69 265639.25 00:27:53.542 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme9n1 : 1.16 224.94 14.06 0.00 0.00 244356.89 4296.25 262532.36 00:27:53.542 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.542 Verification LBA range: start 0x0 length 0x400 00:27:53.542 Nvme10n1 : 1.17 218.95 13.68 0.00 0.00 248550.21 22330.79 259425.47 00:27:53.542 =================================================================================================================== 00:27:53.542 Total : 2299.10 143.69 0.00 0.00 254713.11 2924.85 306028.85 00:27:53.542 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:53.542 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:53.542 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:53.542 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:53.542 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:53.542 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:53.542 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:53.799 rmmod nvme_tcp 00:27:53.799 rmmod nvme_fabrics 00:27:53.799 rmmod 
nvme_keyring 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3680365 ']' 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3680365 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3680365 ']' 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3680365 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3680365 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3680365' 00:27:53.799 killing process with pid 3680365 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3680365 00:27:53.799 18:59:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3680365 00:27:54.365 18:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:54.365 18:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:54.365 18:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:54.365 18:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:54.365 18:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:54.365 18:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.365 18:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.365 18:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:56.264 00:27:56.264 real 0m11.890s 00:27:56.264 user 0m34.623s 00:27:56.264 sys 0m3.190s 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:56.264 ************************************ 00:27:56.264 END TEST nvmf_shutdown_tc1 00:27:56.264 ************************************ 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:56.264 ************************************ 00:27:56.264 START TEST nvmf_shutdown_tc2 00:27:56.264 ************************************ 
00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.264 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:56.265 18:59:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:56.265 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:56.265 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.265 18:59:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:56.265 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:56.265 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.265 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:56.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:27:56.522 00:27:56.522 --- 10.0.0.2 ping statistics --- 00:27:56.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.522 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:27:56.522 00:27:56.522 --- 10.0.0.1 ping statistics --- 00:27:56.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.522 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3681713 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3681713 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3681713 ']' 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.522 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.522 [2024-07-14 18:59:44.647735] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:56.522 [2024-07-14 18:59:44.647834] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.522 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.522 [2024-07-14 18:59:44.718214] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.779 [2024-07-14 18:59:44.809654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.779 [2024-07-14 18:59:44.809716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:56.779 [2024-07-14 18:59:44.809742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.779 [2024-07-14 18:59:44.809756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.779 [2024-07-14 18:59:44.809768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.779 [2024-07-14 18:59:44.809864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.779 [2024-07-14 18:59:44.809964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.779 [2024-07-14 18:59:44.810032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.779 [2024-07-14 18:59:44.810030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.779 [2024-07-14 18:59:44.960700] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.779 18:59:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.036 Malloc1 00:27:57.036 [2024-07-14 18:59:45.042306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.036 Malloc2 00:27:57.036 Malloc3 00:27:57.036 Malloc4 00:27:57.036 Malloc5 00:27:57.036 Malloc6 00:27:57.294 Malloc7 00:27:57.294 Malloc8 00:27:57.294 Malloc9 00:27:57.294 Malloc10 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3681885 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3681885 /var/tmp/bdevperf.sock 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3681885 ']' 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:57.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.294 { 00:27:57.294 "params": { 00:27:57.294 "name": "Nvme$subsystem", 00:27:57.294 "trtype": "$TEST_TRANSPORT", 00:27:57.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.294 "adrfam": "ipv4", 00:27:57.294 "trsvcid": "$NVMF_PORT", 00:27:57.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.294 "hdgst": ${hdgst:-false}, 00:27:57.294 "ddgst": ${ddgst:-false} 00:27:57.294 }, 00:27:57.294 "method": "bdev_nvme_attach_controller" 00:27:57.294 } 00:27:57.294 EOF 00:27:57.294 )") 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.294 { 00:27:57.294 "params": { 00:27:57.294 "name": "Nvme$subsystem", 00:27:57.294 "trtype": "$TEST_TRANSPORT", 00:27:57.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.294 "adrfam": "ipv4", 00:27:57.294 "trsvcid": "$NVMF_PORT", 00:27:57.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.294 "hdgst": ${hdgst:-false}, 00:27:57.294 "ddgst": ${ddgst:-false} 00:27:57.294 }, 00:27:57.294 "method": "bdev_nvme_attach_controller" 00:27:57.294 } 00:27:57.294 EOF 00:27:57.294 
)") 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.294 { 00:27:57.294 "params": { 00:27:57.294 "name": "Nvme$subsystem", 00:27:57.294 "trtype": "$TEST_TRANSPORT", 00:27:57.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.294 "adrfam": "ipv4", 00:27:57.294 "trsvcid": "$NVMF_PORT", 00:27:57.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.294 "hdgst": ${hdgst:-false}, 00:27:57.294 "ddgst": ${ddgst:-false} 00:27:57.294 }, 00:27:57.294 "method": "bdev_nvme_attach_controller" 00:27:57.294 } 00:27:57.294 EOF 00:27:57.294 )") 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.294 { 00:27:57.294 "params": { 00:27:57.294 "name": "Nvme$subsystem", 00:27:57.294 "trtype": "$TEST_TRANSPORT", 00:27:57.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.294 "adrfam": "ipv4", 00:27:57.294 "trsvcid": "$NVMF_PORT", 00:27:57.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.294 "hdgst": ${hdgst:-false}, 00:27:57.294 "ddgst": ${ddgst:-false} 00:27:57.294 }, 00:27:57.294 "method": "bdev_nvme_attach_controller" 00:27:57.294 } 00:27:57.294 EOF 00:27:57.294 )") 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.294 18:59:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.294 { 00:27:57.294 "params": { 00:27:57.294 "name": "Nvme$subsystem", 00:27:57.294 "trtype": "$TEST_TRANSPORT", 00:27:57.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.294 "adrfam": "ipv4", 00:27:57.294 "trsvcid": "$NVMF_PORT", 00:27:57.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.294 "hdgst": ${hdgst:-false}, 00:27:57.294 "ddgst": ${ddgst:-false} 00:27:57.294 }, 00:27:57.294 "method": "bdev_nvme_attach_controller" 00:27:57.294 } 00:27:57.294 EOF 00:27:57.294 )") 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.294 { 00:27:57.294 "params": { 00:27:57.294 "name": "Nvme$subsystem", 00:27:57.294 "trtype": "$TEST_TRANSPORT", 00:27:57.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.294 "adrfam": "ipv4", 00:27:57.294 "trsvcid": "$NVMF_PORT", 00:27:57.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.294 "hdgst": ${hdgst:-false}, 00:27:57.294 "ddgst": ${ddgst:-false} 00:27:57.294 }, 00:27:57.294 "method": "bdev_nvme_attach_controller" 00:27:57.294 } 00:27:57.294 EOF 00:27:57.294 )") 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.294 { 00:27:57.294 "params": { 00:27:57.294 "name": "Nvme$subsystem", 00:27:57.294 "trtype": "$TEST_TRANSPORT", 00:27:57.294 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:57.294 "adrfam": "ipv4", 00:27:57.294 "trsvcid": "$NVMF_PORT", 00:27:57.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.294 "hdgst": ${hdgst:-false}, 00:27:57.294 "ddgst": ${ddgst:-false} 00:27:57.294 }, 00:27:57.294 "method": "bdev_nvme_attach_controller" 00:27:57.294 } 00:27:57.294 EOF 00:27:57.294 )") 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.294 { 00:27:57.294 "params": { 00:27:57.294 "name": "Nvme$subsystem", 00:27:57.294 "trtype": "$TEST_TRANSPORT", 00:27:57.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.294 "adrfam": "ipv4", 00:27:57.294 "trsvcid": "$NVMF_PORT", 00:27:57.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.294 "hdgst": ${hdgst:-false}, 00:27:57.294 "ddgst": ${ddgst:-false} 00:27:57.294 }, 00:27:57.294 "method": "bdev_nvme_attach_controller" 00:27:57.294 } 00:27:57.294 EOF 00:27:57.294 )") 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.294 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.294 { 00:27:57.294 "params": { 00:27:57.294 "name": "Nvme$subsystem", 00:27:57.294 "trtype": "$TEST_TRANSPORT", 00:27:57.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.294 "adrfam": "ipv4", 00:27:57.294 "trsvcid": "$NVMF_PORT", 00:27:57.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.294 
"hdgst": ${hdgst:-false}, 00:27:57.294 "ddgst": ${ddgst:-false} 00:27:57.294 }, 00:27:57.295 "method": "bdev_nvme_attach_controller" 00:27:57.295 } 00:27:57.295 EOF 00:27:57.295 )") 00:27:57.295 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.552 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.552 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.552 { 00:27:57.552 "params": { 00:27:57.552 "name": "Nvme$subsystem", 00:27:57.553 "trtype": "$TEST_TRANSPORT", 00:27:57.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.553 "adrfam": "ipv4", 00:27:57.553 "trsvcid": "$NVMF_PORT", 00:27:57.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.553 "hdgst": ${hdgst:-false}, 00:27:57.553 "ddgst": ${ddgst:-false} 00:27:57.553 }, 00:27:57.553 "method": "bdev_nvme_attach_controller" 00:27:57.553 } 00:27:57.553 EOF 00:27:57.553 )") 00:27:57.553 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:57.553 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:57.553 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=,
00:27:57.553 18:59:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '
{ "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },
{ "params": { "name": "Nvme2", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode2", "hostnqn": "nqn.2016-06.io.spdk:host2", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },
{ "params": { "name": "Nvme3", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode3", "hostnqn": "nqn.2016-06.io.spdk:host3", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },
{ "params": { "name": "Nvme4", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode4", "hostnqn": "nqn.2016-06.io.spdk:host4", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },
{ "params": { "name": "Nvme5", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode5", "hostnqn": "nqn.2016-06.io.spdk:host5", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },
{ "params": { "name": "Nvme6", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode6", "hostnqn": "nqn.2016-06.io.spdk:host6", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },
{ "params": { "name": "Nvme7", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode7", "hostnqn": "nqn.2016-06.io.spdk:host7", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },
{ "params": { "name": "Nvme8", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode8", "hostnqn": "nqn.2016-06.io.spdk:host8", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },
{ "params": { "name": "Nvme9", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode9", "hostnqn": "nqn.2016-06.io.spdk:host9", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },
{ "params": { "name": "Nvme10", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode10", "hostnqn": "nqn.2016-06.io.spdk:host10", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }'
00:27:57.553 [2024-07-14 18:59:45.536661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:27:57.553 [2024-07-14 18:59:45.536757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681885 ]
00:27:57.553 EAL: No free 2048 kB hugepages reported on node 1
00:27:57.553 [2024-07-14 18:59:45.602783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:57.553 [2024-07-14 18:59:45.689667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:58.925 Running I/O for 10 seconds...
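The bdevperf config printed above attaches ten TCP controllers that differ only in their index. A minimal Python sketch of generating the same parameter list (the field names mirror the `bdev_nvme_attach_controller` RPC arguments shown in the log; the helper function itself is hypothetical, not part of SPDK):

```python
import json

def attach_controller_params(n, traddr="10.0.0.2", trsvcid="4420"):
    """Build the per-controller params printed by the test, for controllers 1..n."""
    return [
        {
            "params": {
                "name": f"Nvme{i}",
                "trtype": "tcp",
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{i}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{i}",
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        }
        for i in range(1, n + 1)
    ]

config = attach_controller_params(10)
print(json.dumps(config[0], indent=2))
```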
00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:59.490 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:59.749 18:59:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:00.007 18:59:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3681885 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3681885 ']' 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3681885 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3681885 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3681885'
00:28:00.007 killing process with pid 3681885
00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3681885
00:28:00.007 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3681885
00:28:00.265 Received shutdown signal, test time was about 1.082151 seconds
00:28:00.265 Latency(us)
00:28:00.265 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400)
00:28:00.265 Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s   Average    min       max
00:28:00.265 Nvme1n1            : 1.04        184.25   11.52   0.00    0.00   343517.61  20680.25  281173.71
00:28:00.265 Nvme2n1            : 1.07        238.53   14.91   0.00    0.00   260153.46  19126.80  233016.89
00:28:00.265 Nvme3n1            : 1.05        244.02   15.25   0.00    0.00   250354.54  19612.25  250104.79
00:28:00.265 Nvme4n1            : 1.06        241.50   15.09   0.00    0.00   248513.61  18350.08  260978.92
00:28:00.265 Nvme5n1            : 1.07        239.29   14.96   0.00    0.00   246415.55  17864.63  265639.25
00:28:00.265 Nvme6n1            : 1.06        242.16   15.13   0.00    0.00   238705.40  20777.34  251658.24
00:28:00.265 Nvme7n1            : 1.07        240.28   15.02   0.00    0.00   236377.13  17670.45  262532.36
00:28:00.265 Nvme8n1            : 1.08        237.52   14.84   0.00    0.00   234958.13  17282.09  260978.92
00:28:00.265 Nvme9n1            : 1.08        236.74   14.80   0.00    0.00   231385.51  20971.52  265639.25
00:28:00.265 Nvme10n1           : 1.04        184.91   11.56   0.00    0.00   288043.36  18835.53  285834.05
00:28:00.265 ===================================================================================================================
00:28:00.265 Total              :             2289.20  143.07  0.00    0.00   254793.06  17282.09  285834.05
00:28:00.265 18:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3681713
00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.671 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.671 rmmod nvme_tcp 00:28:01.672 rmmod nvme_fabrics 00:28:01.672 rmmod nvme_keyring 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3681713 ']' 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3681713 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3681713 ']' 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3681713 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3681713 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3681713' 00:28:01.672 killing process with pid 3681713 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3681713 00:28:01.672 18:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3681713 00:28:01.929 18:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:01.929 18:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:01.929 18:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:01.929 18:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.929 18:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.929 18:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.929 18:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.929 18:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:04.461 00:28:04.461 real 0m7.676s 00:28:04.461 user 0m23.241s 00:28:04.461 sys 0m1.496s 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:04.461 ************************************ 00:28:04.461 END TEST nvmf_shutdown_tc2 00:28:04.461 ************************************ 
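The `waitforio` helper traced earlier in this test polls `bdev_get_iostat` up to ten times, sleeping 0.25 s between polls, and declares success once `num_read_ops` reaches 100 (the log shows counts of 3, 67, then 131). A minimal sketch of that retry logic, with the RPC call abstracted behind a callable (the function and parameter names here are illustrative, not SPDK's):

```python
import time

def wait_for_io(read_ops_fn, threshold=100, retries=10, delay=0.25):
    """Poll a read-op counter until it reaches threshold; return True on success."""
    for _ in range(retries):
        if read_ops_fn() >= threshold:
            return True
        time.sleep(delay)
    return False

# Simulate the counts observed in the log: 3, then 67, then 131 read ops.
samples = iter([3, 67, 131])
print(wait_for_io(lambda: next(samples), delay=0))  # → True
```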
00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:04.461 ************************************ 00:28:04.461 START TEST nvmf_shutdown_tc3 00:28:04.461 ************************************ 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- 
# [[ phy != virt ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:04.461 18:59:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:04.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:04.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:04.461 18:59:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:04.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.461 18:59:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:04.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.461 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:04.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:04.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:28:04.462 00:28:04.462 --- 10.0.0.2 ping statistics --- 00:28:04.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.462 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:28:04.462 00:28:04.462 --- 10.0.0.1 ping statistics --- 00:28:04.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.462 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3682793 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3682793 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3682793 ']' 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:04.462 [2024-07-14 18:59:52.363089] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:28:04.462 [2024-07-14 18:59:52.363178] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.462 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.462 [2024-07-14 18:59:52.433244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.462 [2024-07-14 18:59:52.524659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.462 [2024-07-14 18:59:52.524722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.462 [2024-07-14 18:59:52.524747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.462 [2024-07-14 18:59:52.524761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.462 [2024-07-14 18:59:52.524772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:04.462 [2024-07-14 18:59:52.524886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.462 [2024-07-14 18:59:52.525005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.462 [2024-07-14 18:59:52.525073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.462 [2024-07-14 18:59:52.525071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.462 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:04.462 [2024-07-14 18:59:52.681816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:04.721 
18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.721 18:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:04.721 Malloc1 00:28:04.721 [2024-07-14 18:59:52.771562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.721 Malloc2 00:28:04.721 Malloc3 00:28:04.721 Malloc4 00:28:04.721 Malloc5 00:28:04.980 Malloc6 00:28:04.980 Malloc7 00:28:04.980 Malloc8 00:28:04.980 Malloc9 00:28:04.980 Malloc10 00:28:04.980 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.980 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:04.980 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.980 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3682975 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
3682975 /var/tmp/bdevperf.sock 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3682975 ']' 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:05.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 "hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.238 EOF 00:28:05.238 )") 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 "hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.238 EOF 00:28:05.238 
)") 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 "hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.238 EOF 00:28:05.238 )") 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 "hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.238 EOF 00:28:05.238 )") 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 "hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.238 EOF 00:28:05.238 )") 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 "hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.238 EOF 00:28:05.238 )") 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 "hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.238 EOF 00:28:05.238 )") 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 "hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.238 EOF 00:28:05.238 )") 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 
"hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.238 EOF 00:28:05.238 )") 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:05.238 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:05.238 { 00:28:05.238 "params": { 00:28:05.238 "name": "Nvme$subsystem", 00:28:05.238 "trtype": "$TEST_TRANSPORT", 00:28:05.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.238 "adrfam": "ipv4", 00:28:05.238 "trsvcid": "$NVMF_PORT", 00:28:05.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.238 "hdgst": ${hdgst:-false}, 00:28:05.238 "ddgst": ${ddgst:-false} 00:28:05.238 }, 00:28:05.238 "method": "bdev_nvme_attach_controller" 00:28:05.238 } 00:28:05.239 EOF 00:28:05.239 )") 00:28:05.239 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:05.239 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:05.239 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:05.239 18:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme1", 00:28:05.239 "trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 },{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme2", 00:28:05.239 "trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 },{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme3", 00:28:05.239 "trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 },{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme4", 00:28:05.239 "trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 },{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme5", 00:28:05.239 
"trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 },{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme6", 00:28:05.239 "trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 },{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme7", 00:28:05.239 "trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 },{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme8", 00:28:05.239 "trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 },{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme9", 00:28:05.239 "trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": 
false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 },{ 00:28:05.239 "params": { 00:28:05.239 "name": "Nvme10", 00:28:05.239 "trtype": "tcp", 00:28:05.239 "traddr": "10.0.0.2", 00:28:05.239 "adrfam": "ipv4", 00:28:05.239 "trsvcid": "4420", 00:28:05.239 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:05.239 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:05.239 "hdgst": false, 00:28:05.239 "ddgst": false 00:28:05.239 }, 00:28:05.239 "method": "bdev_nvme_attach_controller" 00:28:05.239 }' 00:28:05.239 [2024-07-14 18:59:53.266564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:05.239 [2024-07-14 18:59:53.266655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682975 ] 00:28:05.239 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.239 [2024-07-14 18:59:53.331961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.239 [2024-07-14 18:59:53.418601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.136 Running I/O for 10 seconds... 
00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:07.136 
18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:07.136 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:07.394 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:07.652 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:07.652 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i 
!= 0 )) 00:28:07.652 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:07.652 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:07.652 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.652 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3682793 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3682793 ']' 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3682793 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3682793 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:07.925 
18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3682793' 00:28:07.925 killing process with pid 3682793 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3682793 00:28:07.925 18:59:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3682793 00:28:07.925 [2024-07-14 18:59:55.937989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be 
set 00:28:07.925 [2024-07-14 18:59:55.938236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 
18:59:55.938415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.925 [2024-07-14 18:59:55.938452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938564] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938714] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938867] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.938936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8790 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940413] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940563] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940707] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940858] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.926 [2024-07-14 18:59:55.940989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941038] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.941136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eb190 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.927 [2024-07-14 18:59:55.943233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.927 [2024-07-14 18:59:55.943251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.927 [2024-07-14 18:59:55.943271] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.927 [2024-07-14 18:59:55.943287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.927 [2024-07-14 18:59:55.943301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.927 [2024-07-14 18:59:55.943315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.927 [2024-07-14 18:59:55.943328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.927 [2024-07-14 18:59:55.943341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bb8b0 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.927 [2024-07-14 18:59:55.943457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-14 18:59:55.943471] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.927 the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.927 [2024-07-14 18:59:55.943499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.927 [2024-07-14 18:59:55.943512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.927 [2024-07-14 18:59:55.943525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.927 [2024-07-14 18:59:55.943537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.927 [2024-07-14 18:59:55.943550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the 
state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.927 [2024-07-14 18:59:55.943563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1538290 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 
00:28:07.927 [2024-07-14 18:59:55.943702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 
18:59:55.943892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.943997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944057] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.927 [2024-07-14 18:59:55.944177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.944190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.944202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.944214] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.944226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.944239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.944250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.944263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.944275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c30 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.946638] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:07.928 [2024-07-14 18:59:55.946663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.946698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.946716] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:07.928 [2024-07-14 18:59:55.946725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.946739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set 00:28:07.928 [2024-07-14 18:59:55.946757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set 
00:28:07.928 [2024-07-14 18:59:55.946774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946862] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.946991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.947700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e90d0 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.928 [2024-07-14 18:59:55.949471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949612] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949702] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:07.929 [2024-07-14 18:59:55.949716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.949996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.950218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9590 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.929 [2024-07-14 18:59:55.951409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951752] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.951990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.952002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.952014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.952026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.952038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.952050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9a30 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.952675] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:07.930 [2024-07-14 18:59:55.953256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703e10 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.953436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bb8b0 (9): Bad file descriptor
00:28:07.930 [2024-07-14 18:59:55.953526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1574b50 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.953694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558830 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.953850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1538290 (9): Bad file descriptor
00:28:07.930 [2024-07-14 18:59:55.953906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.953976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.953990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.954005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:07.930 [2024-07-14 18:59:55.954018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.930 [2024-07-14 18:59:55.954031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c700 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.954057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.954086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.954101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set
00:28:07.930 [2024-07-14 18:59:55.954113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set
00:28:07.931
[2024-07-14 18:59:55.954141] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:07.931 [2024-07-14 18:59:55.954127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931 [2024-07-14 18:59:55.954305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954738] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.931
[2024-07-14 18:59:55.954860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.931
[2024-07-14 18:59:55.954870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.931
[2024-07-14 18:59:55.954872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.932
[2024-07-14 18:59:55.954894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.932
[2024-07-14 18:59:55.954894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932
[2024-07-14 18:59:55.954913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.932
[2024-07-14 18:59:55.954917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932
[2024-07-14 18:59:55.954931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.932
[2024-07-14 18:59:55.954932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932
[2024-07-14 18:59:55.954946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.932
[2024-07-14 18:59:55.954950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932
[2024-07-14 18:59:55.954959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea390 is same with the state(5) to be set 00:28:07.932
[2024-07-14 18:59:55.954965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932
[2024-07-14 18:59:55.954982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932
[2024-07-14 18:59:55.954996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932
[2024-07-14 18:59:55.955012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932
[2024-07-14 18:59:55.955026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932
[2024-07-14 18:59:55.955042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932
[2024-07-14 18:59:55.955056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932
[2024-07-14 18:59:55.955072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932
[2024-07-14 18:59:55.955086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932
[2024-07-14 18:59:55.955101] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955634] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955820] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.955978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.955994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.956008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.956024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.956038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.956054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.956055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.932 [2024-07-14 18:59:55.956073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.956083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.932 [2024-07-14 18:59:55.956089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 [2024-07-14 18:59:55.956097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.932 [2024-07-14 18:59:55.956104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.932 [2024-07-14 18:59:55.956110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.932 [2024-07-14 18:59:55.956121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:12[2024-07-14 18:59:55.956122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.932 the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.933 [2024-07-14 18:59:55.956166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.933 [2024-07-14 18:59:55.956195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.933 [2024-07-14 18:59:55.956221] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.933 [2024-07-14 18:59:55.956259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.933 [2024-07-14 18:59:55.956291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.933 [2024-07-14 18:59:55.956324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.933 [2024-07-14 18:59:55.956363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.933 [2024-07-14 18:59:55.956388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956416] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.933 [2024-07-14 18:59:55.956440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.933 [2024-07-14 18:59:55.956453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956531] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16a2ff0 was disconnected and freed. reset controller. 00:28:07.933 [2024-07-14 18:59:55.956540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956690] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956938] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.956988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ea830 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957845] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.933 [2024-07-14 18:59:55.957939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.957951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.957963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.957975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.957987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.957999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958023] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958192] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:07.934 [2024-07-14 18:59:55.958243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030610 (9): Bad file descriptor 00:28:07.934 [2024-07-14 18:59:55.958315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.934 [2024-07-14 18:59:55.958328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 
is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.958617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23eacd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.959256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.935 [2024-07-14 18:59:55.959286] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030610 with addr=10.0.0.2, port=4420 00:28:07.935 [2024-07-14 18:59:55.959303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030610 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.959377] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:07.935 [2024-07-14 18:59:55.959455] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:07.935 [2024-07-14 18:59:55.959607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030610 (9): Bad file descriptor 00:28:07.935 [2024-07-14 18:59:55.959731] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:07.935 [2024-07-14 18:59:55.959821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:07.935 [2024-07-14 18:59:55.959843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:07.935 [2024-07-14 18:59:55.959861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:07.935 [2024-07-14 18:59:55.960012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.935 [2024-07-14 18:59:55.960201] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:07.935 [2024-07-14 18:59:55.963264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703e10 (9): Bad file descriptor 00:28:07.935 [2024-07-14 18:59:55.963325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158e910 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.963532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1591370 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.963702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.935 [2024-07-14 18:59:55.963817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.963830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155bfd0 is same with the state(5) to be set 00:28:07.935 [2024-07-14 18:59:55.963860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1574b50 (9): Bad file descriptor 00:28:07.935 [2024-07-14 18:59:55.963904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1558830 (9): Bad file descriptor 00:28:07.935 [2024-07-14 18:59:55.963946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c700 (9): Bad file descriptor 00:28:07.935 [2024-07-14 18:59:55.964091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:07.935 [2024-07-14 18:59:55.964152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.935 [2024-07-14 18:59:55.964450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.935 [2024-07-14 18:59:55.964466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.964970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.964986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965063] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 
18:59:55.965435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.936 [2024-07-14 18:59:55.965823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.936 [2024-07-14 18:59:55.965837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.965853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.965867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.965890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.965906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.965927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.965942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.986033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.986097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.937 [2024-07-14 18:59:55.986116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.986132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.986149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.986177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.986194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.986210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.986226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.986241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.986258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.986272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.986289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.986304] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.986321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8a70 is same with the state(5) to be set 00:28:07.937 [2024-07-14 18:59:55.987764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.987789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.987814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.987830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.987847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.987863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.987888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.987905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.987922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.987937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.987954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.987969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.987986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:07.937 [2024-07-14 18:59:55.988133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.937 [2024-07-14 18:59:55.988772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.937 [2024-07-14 18:59:55.988788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.988803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.988819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.988837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.988854] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.988869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.988893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.988921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.988937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.988952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.988968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.988983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.988999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989044] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 
18:59:55.989406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.938 [2024-07-14 18:59:55.989830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.938 [2024-07-14 18:59:55.989845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a7e20 is same with the state(5) to be set 00:28:07.938 [2024-07-14 18:59:55.991479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.938 [2024-07-14 18:59:55.991510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:07.938 [2024-07-14 18:59:55.991605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158e910 (9): Bad file descriptor 00:28:07.938 [2024-07-14 18:59:55.991646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1591370 (9): Bad file descriptor 00:28:07.938 [2024-07-14 18:59:55.991682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155bfd0 (9): Bad file descriptor 00:28:07.938 [2024-07-14 18:59:55.992121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:07.938 [2024-07-14 18:59:55.992153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1538290 with addr=10.0.0.2, port=4420 00:28:07.938 [2024-07-14 18:59:55.992171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1538290 is same with the state(5) to be set 00:28:07.938 [2024-07-14 18:59:55.992282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.938 [2024-07-14 18:59:55.992308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bb8b0 with addr=10.0.0.2, port=4420 00:28:07.939 [2024-07-14 18:59:55.992324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bb8b0 is same with the state(5) to be set 00:28:07.939 [2024-07-14 18:59:55.992657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.992680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.992708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.992725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.992742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.992757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.992774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.992789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.992806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.992821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.992838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.992853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.992870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.992895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.992923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.992937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.992954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.992968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.992985] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993160] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 
18:59:55.993520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.993974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.993988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.994005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.939 [2024-07-14 18:59:55.994019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.939 [2024-07-14 18:59:55.994036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.940 [2024-07-14 18:59:55.994067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994234] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.940 [2024-07-14 18:59:55.994413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.940 [2024-07-14 18:59:55.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... repeated command/completion pairs elided: nvme_io_qpair_print_command READ sqid:1 cid:55-63 nsid:1 lba:23424-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 
00:28:07.940 [2024-07-14 18:59:55.994725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0d40 is same with the state(5) to be set 
[... repeated command/completion pairs elided: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 
00:28:07.942 [2024-07-14 18:59:55.998024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2160 is same with the state(5) to be set 
00:28:07.942 [2024-07-14 18:59:55.999279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.942 [2024-07-14 18:59:55.999302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.942 [2024-07-14 18:59:55.999324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.942 [2024-07-14 18:59:55.999340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... repeated command/completion pairs elided: READ sqid:1 cid:0-41 nsid:1 lba:24576-29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, ending at 00:28:07.943 [2024-07-14 18:59:56.000666] ...] 
00:28:07.943 [2024-07-14 18:59:56.000683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.000698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.000714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.000729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.000745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.000759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.000775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.000797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.000815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.000830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.000846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.000859] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.000891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.000909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.000926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.000941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.000958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.000972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.000989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.001317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.001332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1532a00 is same with the state(5) to be set 00:28:07.943 [2024-07-14 18:59:56.002559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002640] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.943 [2024-07-14 18:59:56.002972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.943 [2024-07-14 18:59:56.002987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003192] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003367] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 
18:59:56.003724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.003983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.003997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.004014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.004028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.004048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.004063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.004080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.004094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.004111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.004125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.004141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.004156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.004172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.004186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.004203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.004218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.944 [2024-07-14 18:59:56.004235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.944 [2024-07-14 18:59:56.004249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.944 [2024-07-14 18:59:56.004265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.944 [2024-07-14 18:59:56.004280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.944 [2024-07-14 18:59:56.004296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.944 [2024-07-14 18:59:56.004310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.944 [2024-07-14 18:59:56.004327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.944 [2024-07-14 18:59:56.004341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.004358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.004372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.004389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.004403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.004420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.004438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.004455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.011576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.011644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.011662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.011680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.011694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.011711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.011727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.011745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.011759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.011775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1533eb0 is same with the state(5) to be set
00:28:07.945 [2024-07-14 18:59:56.013398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:07.945 [2024-07-14 18:59:56.013438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:07.945 [2024-07-14 18:59:56.013457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:28:07.945 [2024-07-14 18:59:56.013475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:28:07.945 [2024-07-14 18:59:56.013546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1538290 (9): Bad file descriptor
00:28:07.945 [2024-07-14 18:59:56.013572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bb8b0 (9): Bad file descriptor
00:28:07.945 [2024-07-14 18:59:56.013656] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:07.945 [2024-07-14 18:59:56.013684] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:07.945 [2024-07-14 18:59:56.013705] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:07.945 [2024-07-14 18:59:56.013817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:28:07.945 [2024-07-14 18:59:56.014133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.945 [2024-07-14 18:59:56.014168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030610 with addr=10.0.0.2, port=4420
00:28:07.945 [2024-07-14 18:59:56.014187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030610 is same with the state(5) to be set
00:28:07.945 [2024-07-14 18:59:56.014289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.945 [2024-07-14 18:59:56.014315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1703e10 with addr=10.0.0.2, port=4420
00:28:07.945 [2024-07-14 18:59:56.014341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703e10 is same with the state(5) to be set
00:28:07.945 [2024-07-14 18:59:56.014448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.945 [2024-07-14 18:59:56.014473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155c700 with addr=10.0.0.2, port=4420
00:28:07.945 [2024-07-14 18:59:56.014490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155c700 is same with the state(5) to be set
00:28:07.945 [2024-07-14 18:59:56.014588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.945 [2024-07-14 18:59:56.014614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574b50 with addr=10.0.0.2, port=4420
00:28:07.945 [2024-07-14 18:59:56.014630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1574b50 is same with the state(5) to be set
00:28:07.945 [2024-07-14 18:59:56.014645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:07.945 [2024-07-14 18:59:56.014659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:07.945 [2024-07-14 18:59:56.014675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:07.945 [2024-07-14 18:59:56.014697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:28:07.945 [2024-07-14 18:59:56.014712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:28:07.945 [2024-07-14 18:59:56.014726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:28:07.945 [2024-07-14 18:59:56.015824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.015851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.015874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.015899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.015923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.015938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.015955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.015970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.015986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.945 [2024-07-14 18:59:56.016514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.945 [2024-07-14 18:59:56.016531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.016980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.016995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.946 [2024-07-14 18:59:56.017855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.946 [2024-07-14 18:59:56.017870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.017891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a44a0 is same with the state(5) to be set
00:28:07.947 [2024-07-14 18:59:56.019148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.019975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.019991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.020006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.020022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.020036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.020052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.947 [2024-07-14 18:59:56.020067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:07.947 [2024-07-14 18:59:56.020084] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.947 [2024-07-14 18:59:56.020099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.947 [2024-07-14 18:59:56.020115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.947 [2024-07-14 18:59:56.020137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.947 [2024-07-14 18:59:56.020154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.947 [2024-07-14 18:59:56.020169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.947 [2024-07-14 18:59:56.020186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.947 [2024-07-14 18:59:56.020200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.947 [2024-07-14 18:59:56.020217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.947 [2024-07-14 18:59:56.020231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.947 [2024-07-14 18:59:56.020248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.947 [2024-07-14 18:59:56.020263] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.947 [2024-07-14 18:59:56.020280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.947 [2024-07-14 18:59:56.020294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.947 [2024-07-14 18:59:56.020311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 
18:59:56.020624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.020975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.020990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.021007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.021021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.021037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.021051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.021068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.021083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.021101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.021115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.021132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.021146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.948 [2024-07-14 18:59:56.021164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.021179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.021195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5950 is same with the state(5) to be set 00:28:07.948 [2024-07-14 18:59:56.022414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.948 [2024-07-14 18:59:56.022870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.948 [2024-07-14 18:59:56.022892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.022909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.022924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.022945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.022960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.022976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.022991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023103] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023273] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 
18:59:56.023638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023811] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.023975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.023992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.024006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.024022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.024037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.024054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.024068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.024085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.024099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.024115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.024134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.949 [2024-07-14 18:59:56.024152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.949 [2024-07-14 18:59:56.024166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.949 [2024-07-14 18:59:56.024183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.950 [2024-07-14 18:59:56.024198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.950 [2024-07-14 18:59:56.024214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.950 [2024-07-14 18:59:56.024229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.950 [2024-07-14 18:59:56.024246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.950 [2024-07-14 18:59:56.024260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.950 [2024-07-14 18:59:56.024277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.950 [2024-07-14 18:59:56.024291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.950 [2024-07-14 18:59:56.024308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.950 [2024-07-14 18:59:56.024323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.950 [2024-07-14 18:59:56.024341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.950 [2024-07-14 18:59:56.024355] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.950 [2024-07-14 18:59:56.024372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.950 [2024-07-14 18:59:56.024386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.950 [2024-07-14 18:59:56.024403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.950 [2024-07-14 18:59:56.024417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.950 [2024-07-14 18:59:56.024434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.950 [2024-07-14 18:59:56.024448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.950 [2024-07-14 18:59:56.024463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a6a60 is same with the state(5) to be set 00:28:07.950 [2024-07-14 18:59:56.026815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.950 [2024-07-14 18:59:56.026843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.950 [2024-07-14 18:59:56.026861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:07.950 [2024-07-14 18:59:56.026887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:07.950 task offset: 30720 on job bdev=Nvme6n1 fails
00:28:07.950
00:28:07.950 Latency(us)
00:28:07.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:07.950 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme1n1 ended in about 0.91 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme1n1 : 0.91 140.24 8.76 70.12 0.00 300863.34 23107.51 259425.47
00:28:07.950 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme2n1 ended in about 0.92 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme2n1 : 0.92 138.96 8.69 69.48 0.00 297385.53 24758.04 270299.59
00:28:07.950 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme3n1 ended in about 0.92 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme3n1 : 0.92 207.71 12.98 69.24 0.00 219186.44 31068.92 237677.23
00:28:07.950 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme4n1 ended in about 0.93 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme4n1 : 0.93 206.97 12.94 68.99 0.00 215395.18 19515.16 251658.24
00:28:07.950 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme5n1 ended in about 0.94 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme5n1 : 0.94 136.44 8.53 68.22 0.00 284599.44 19223.89 268746.15
00:28:07.950 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme6n1 ended in about 0.88 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme6n1 : 0.88 221.89 13.87 72.46 0.00 192094.23 4126.34 257872.02
00:28:07.950 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme7n1 ended in about 0.94 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme7n1 : 0.94 135.56 8.47 67.78 0.00 274450.46 21456.97 273406.48
00:28:07.950 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme8n1 ended in about 0.95 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme8n1 : 0.95 135.10 8.44 67.55 0.00 269538.54 19126.80 251658.24
00:28:07.950 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme9n1 ended in about 0.95 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme9n1 : 0.95 138.84 8.68 67.32 0.00 259378.91 22233.69 282727.16
00:28:07.950 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:07.950 Job: Nvme10n1 ended in about 0.92 seconds with error
00:28:07.950 Verification LBA range: start 0x0 length 0x400
00:28:07.950 Nvme10n1 : 0.92 139.70 8.73 69.85 0.00 247517.23 19709.35 253211.69
00:28:07.950 ===================================================================================================================
00:28:07.950 Total : 1601.41 100.09 691.00 0.00 251656.48 4126.34 282727.16
00:28:07.950 [2024-07-14 18:59:56.053477] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:07.950 [2024-07-14 18:59:56.053570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:07.950 [2024-07-14 18:59:56.053896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.950 [2024-07-14 18:59:56.053934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1558830 with addr=10.0.0.2, port=4420
00:28:07.950 [2024-07-14 18:59:56.053955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558830 is same with the state(5) to be set
00:28:07.950 [2024-07-14 18:59:56.053985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030610 (9): Bad file descriptor
00:28:07.950 [2024-07-14 18:59:56.054022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703e10 (9): Bad file descriptor
00:28:07.950 [2024-07-14 18:59:56.054042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155c700 (9): Bad file descriptor
00:28:07.950 [2024-07-14 18:59:56.054061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1574b50 (9): Bad file descriptor
00:28:07.950 [2024-07-14 18:59:56.054137] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:07.950 [2024-07-14 18:59:56.054165] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:07.950 [2024-07-14 18:59:56.054187] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:07.950 [2024-07-14 18:59:56.054207] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:28:07.950 [2024-07-14 18:59:56.054228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1558830 (9): Bad file descriptor 00:28:07.950 [2024-07-14 18:59:56.054531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.950 [2024-07-14 18:59:56.054562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155bfd0 with addr=10.0.0.2, port=4420 00:28:07.950 [2024-07-14 18:59:56.054580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155bfd0 is same with the state(5) to be set 00:28:07.950 [2024-07-14 18:59:56.054681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.950 [2024-07-14 18:59:56.054708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x158e910 with addr=10.0.0.2, port=4420 00:28:07.950 [2024-07-14 18:59:56.054724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158e910 is same with the state(5) to be set 00:28:07.950 [2024-07-14 18:59:56.054826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.950 [2024-07-14 18:59:56.054852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1591370 with addr=10.0.0.2, port=4420 00:28:07.950 [2024-07-14 18:59:56.054868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1591370 is same with the state(5) to be set 00:28:07.950 [2024-07-14 18:59:56.054892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:07.950 [2024-07-14 18:59:56.054907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:07.950 [2024-07-14 18:59:56.054934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:28:07.950 [2024-07-14 18:59:56.054957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:07.950 [2024-07-14 18:59:56.054973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:07.950 [2024-07-14 18:59:56.054986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:07.950 [2024-07-14 18:59:56.055004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:07.950 [2024-07-14 18:59:56.055018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:07.950 [2024-07-14 18:59:56.055032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:07.950 [2024-07-14 18:59:56.055049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:07.950 [2024-07-14 18:59:56.055063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:07.950 [2024-07-14 18:59:56.055077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:07.950 [2024-07-14 18:59:56.055118] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:07.950 [2024-07-14 18:59:56.055143] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:07.950 [2024-07-14 18:59:56.055162] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:07.950 [2024-07-14 18:59:56.055181] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:07.950 [2024-07-14 18:59:56.055200] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:07.950 [2024-07-14 18:59:56.055218] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:07.950 [2024-07-14 18:59:56.055236] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:07.951 [2024-07-14 18:59:56.056074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:07.951 [2024-07-14 18:59:56.056103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.951 [2024-07-14 18:59:56.056148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.951 [2024-07-14 18:59:56.056166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.951 [2024-07-14 18:59:56.056178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.951 [2024-07-14 18:59:56.056190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.951 [2024-07-14 18:59:56.056221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155bfd0 (9): Bad file descriptor 00:28:07.951 [2024-07-14 18:59:56.056245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158e910 (9): Bad file descriptor 00:28:07.951 [2024-07-14 18:59:56.056265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1591370 (9): Bad file descriptor 00:28:07.951 [2024-07-14 18:59:56.056281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:07.951 [2024-07-14 18:59:56.056294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:07.951 [2024-07-14 18:59:56.056308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:07.951 [2024-07-14 18:59:56.056380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.951 [2024-07-14 18:59:56.056498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.951 [2024-07-14 18:59:56.056526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bb8b0 with addr=10.0.0.2, port=4420 00:28:07.951 [2024-07-14 18:59:56.056543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bb8b0 is same with the state(5) to be set 00:28:07.951 [2024-07-14 18:59:56.056637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.951 [2024-07-14 18:59:56.056663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1538290 with addr=10.0.0.2, port=4420 00:28:07.951 [2024-07-14 18:59:56.056679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1538290 is same with the state(5) to be set 00:28:07.951 [2024-07-14 18:59:56.056695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:07.951 [2024-07-14 18:59:56.056708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:07.951 [2024-07-14 18:59:56.056721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:07.951 [2024-07-14 18:59:56.056740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:07.951 [2024-07-14 18:59:56.056755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:07.951 [2024-07-14 18:59:56.056775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:28:07.951 [2024-07-14 18:59:56.056792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:07.951 [2024-07-14 18:59:56.056807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:07.951 [2024-07-14 18:59:56.056820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:07.951 [2024-07-14 18:59:56.056921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.951 [2024-07-14 18:59:56.056942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.951 [2024-07-14 18:59:56.056956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.951 [2024-07-14 18:59:56.056972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bb8b0 (9): Bad file descriptor 00:28:07.951 [2024-07-14 18:59:56.056992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1538290 (9): Bad file descriptor 00:28:07.951 [2024-07-14 18:59:56.057031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:07.951 [2024-07-14 18:59:56.057050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:07.951 [2024-07-14 18:59:56.057065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:28:07.951 [2024-07-14 18:59:56.057083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.951 [2024-07-14 18:59:56.057097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.951 [2024-07-14 18:59:56.057111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.951 [2024-07-14 18:59:56.057151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:07.951 [2024-07-14 18:59:56.057169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.517 18:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:08.517 18:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3682975 00:28:09.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3682975) - No such process 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:09.451 18:59:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:09.451 rmmod nvme_tcp 00:28:09.451 rmmod nvme_fabrics 00:28:09.451 rmmod nvme_keyring 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.451 18:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.451 18:59:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.985 18:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.985 00:28:11.985 real 0m7.455s 00:28:11.985 user 0m18.176s 00:28:11.985 sys 0m1.465s 00:28:11.985 18:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.985 18:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:11.985 ************************************ 00:28:11.985 END TEST nvmf_shutdown_tc3 00:28:11.985 ************************************ 00:28:11.985 18:59:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:11.985 18:59:59 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:11.985 00:28:11.985 real 0m27.238s 00:28:11.985 user 1m16.116s 00:28:11.985 sys 0m6.307s 00:28:11.985 18:59:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.985 18:59:59 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:11.985 ************************************ 00:28:11.985 END TEST nvmf_shutdown 00:28:11.985 ************************************ 00:28:11.985 18:59:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:11.985 18:59:59 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:11.985 18:59:59 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:11.985 18:59:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.985 18:59:59 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:11.985 18:59:59 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:11.985 18:59:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.985 18:59:59 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:11.985 18:59:59 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:11.985 18:59:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:11.985 18:59:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.985 18:59:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.985 ************************************ 00:28:11.985 START TEST nvmf_multicontroller 00:28:11.985 ************************************ 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:11.985 * Looking for test storage... 00:28:11.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:11.985 18:59:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:11.985 18:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.986 18:59:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.986 18:59:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:13.381 
19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == 
e810 ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:13.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:13.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:13.381 19:00:01 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:13.381 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.381 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:28:13.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:13.382 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.382 19:00:01 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:13.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:28:13.639 00:28:13.639 --- 10.0.0.2 ping statistics --- 00:28:13.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.639 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:13.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:28:13.639 00:28:13.639 --- 10.0.0.1 ping statistics --- 00:28:13.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.639 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3685468 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3685468 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3685468 ']' 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:13.639 19:00:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:13.640 19:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.640 19:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:13.640 19:00:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:13.640 [2024-07-14 19:00:01.805298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:13.640 [2024-07-14 19:00:01.805383] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.640 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.897 [2024-07-14 19:00:01.876036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:13.897 [2024-07-14 19:00:01.974424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.897 [2024-07-14 19:00:01.974487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.897 [2024-07-14 19:00:01.974502] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.897 [2024-07-14 19:00:01.974515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:13.897 [2024-07-14 19:00:01.974527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.897 [2024-07-14 19:00:01.974633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.897 [2024-07-14 19:00:01.974662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.897 [2024-07-14 19:00:01.974663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.897 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:13.897 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:13.897 19:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:13.897 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:13.897 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:13.897 19:00:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.897 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:13.897 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.897 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:13.897 [2024-07-14 19:00:02.119379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:28:14.154 Malloc0 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.154 [2024-07-14 19:00:02.185399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 19:00:02 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.154 [2024-07-14 19:00:02.193249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.154 Malloc1 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 
19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3685624 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3685624 /var/tmp/bdevperf.sock 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3685624 ']' 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:14.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.154 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.414 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.414 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:14.414 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:14.414 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.414 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.680 NVMe0n1 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.680 1 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.680 request: 00:28:14.680 { 00:28:14.680 "name": "NVMe0", 00:28:14.680 "trtype": "tcp", 00:28:14.680 "traddr": "10.0.0.2", 00:28:14.680 "adrfam": "ipv4", 00:28:14.680 "trsvcid": "4420", 00:28:14.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.680 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:14.680 "hostaddr": "10.0.0.2", 00:28:14.680 "hostsvcid": "60000", 00:28:14.680 "prchk_reftag": false, 00:28:14.680 "prchk_guard": false, 00:28:14.680 "hdgst": false, 00:28:14.680 "ddgst": false, 00:28:14.680 "method": "bdev_nvme_attach_controller", 00:28:14.680 "req_id": 1 00:28:14.680 } 00:28:14.680 Got JSON-RPC error response 00:28:14.680 response: 00:28:14.680 { 00:28:14.680 "code": -114, 00:28:14.680 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:14.680 } 00:28:14.680 
19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:28:14.680 request: 00:28:14.680 { 00:28:14.680 "name": "NVMe0", 00:28:14.680 "trtype": "tcp", 00:28:14.680 "traddr": "10.0.0.2", 00:28:14.680 "adrfam": "ipv4", 00:28:14.680 "trsvcid": "4420", 00:28:14.680 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:14.680 "hostaddr": "10.0.0.2", 00:28:14.680 "hostsvcid": "60000", 00:28:14.680 "prchk_reftag": false, 00:28:14.680 "prchk_guard": false, 00:28:14.680 "hdgst": false, 00:28:14.680 "ddgst": false, 00:28:14.680 "method": "bdev_nvme_attach_controller", 00:28:14.680 "req_id": 1 00:28:14.680 } 00:28:14.680 Got JSON-RPC error response 00:28:14.680 response: 00:28:14.680 { 00:28:14.680 "code": -114, 00:28:14.680 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:14.680 } 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.680 request: 00:28:14.680 { 00:28:14.680 "name": "NVMe0", 00:28:14.680 "trtype": "tcp", 00:28:14.680 "traddr": "10.0.0.2", 00:28:14.680 "adrfam": "ipv4", 00:28:14.680 "trsvcid": "4420", 00:28:14.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.680 "hostaddr": "10.0.0.2", 00:28:14.680 "hostsvcid": "60000", 00:28:14.680 "prchk_reftag": false, 00:28:14.680 "prchk_guard": false, 00:28:14.680 "hdgst": false, 00:28:14.680 "ddgst": false, 00:28:14.680 "multipath": "disable", 00:28:14.680 "method": "bdev_nvme_attach_controller", 00:28:14.680 "req_id": 1 00:28:14.680 } 00:28:14.680 Got JSON-RPC error response 00:28:14.680 response: 00:28:14.680 { 00:28:14.680 "code": -114, 00:28:14.680 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:14.680 } 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.680 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.680 request: 00:28:14.680 { 00:28:14.680 "name": "NVMe0", 00:28:14.680 "trtype": "tcp", 00:28:14.680 "traddr": "10.0.0.2", 00:28:14.680 "adrfam": "ipv4", 00:28:14.680 "trsvcid": "4420", 00:28:14.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.681 "hostaddr": "10.0.0.2", 00:28:14.681 
"hostsvcid": "60000", 00:28:14.681 "prchk_reftag": false, 00:28:14.681 "prchk_guard": false, 00:28:14.681 "hdgst": false, 00:28:14.681 "ddgst": false, 00:28:14.681 "multipath": "failover", 00:28:14.681 "method": "bdev_nvme_attach_controller", 00:28:14.681 "req_id": 1 00:28:14.681 } 00:28:14.681 Got JSON-RPC error response 00:28:14.681 response: 00:28:14.681 { 00:28:14.681 "code": -114, 00:28:14.681 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:14.681 } 00:28:14.681 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:14.681 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:14.681 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:14.681 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:14.681 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:14.681 19:00:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:14.681 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.681 19:00:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.938 00:28:14.938 19:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.938 19:00:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:14.938 19:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.938 19:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.938 19:00:03 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.938 19:00:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:14.938 19:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.938 19:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.194 00:28:15.194 19:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.194 19:00:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:15.194 19:00:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:15.194 19:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.194 19:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.194 19:00:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.194 19:00:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:15.194 19:00:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:16.564 0 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.564 
19:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3685624 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3685624 ']' 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3685624 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3685624 00:28:16.564 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3685624' 00:28:16.565 killing process with pid 3685624 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3685624 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3685624 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.565 19:00:04 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:16.565 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:16.565 [2024-07-14 19:00:02.299097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:28:16.565 [2024-07-14 19:00:02.299180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685624 ] 00:28:16.565 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.565 [2024-07-14 19:00:02.359923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.565 [2024-07-14 19:00:02.445968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.565 [2024-07-14 19:00:03.332184] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name cacfb314-aefe-4e81-af06-8a24e73b1f5e already exists 00:28:16.565 [2024-07-14 19:00:03.332221] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:cacfb314-aefe-4e81-af06-8a24e73b1f5e alias for bdev NVMe1n1 00:28:16.565 [2024-07-14 19:00:03.332245] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:16.565 Running I/O for 1 seconds... 
00:28:16.565 00:28:16.565 Latency(us) 00:28:16.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.565 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:16.565 NVMe0n1 : 1.00 18833.39 73.57 0.00 0.00 6785.44 3713.71 11796.48 00:28:16.565 =================================================================================================================== 00:28:16.565 Total : 18833.39 73.57 0.00 0.00 6785.44 3713.71 11796.48 00:28:16.565 Received shutdown signal, test time was about 1.000000 seconds 00:28:16.565 00:28:16.565 Latency(us) 00:28:16.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.565 =================================================================================================================== 00:28:16.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:16.565 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:16.565 rmmod nvme_tcp 00:28:16.565 rmmod nvme_fabrics 00:28:16.565 rmmod nvme_keyring 
00:28:16.565 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3685468 ']' 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3685468 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3685468 ']' 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3685468 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3685468 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3685468' 00:28:16.822 killing process with pid 3685468 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3685468 00:28:16.822 19:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3685468 00:28:17.079 19:00:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:17.079 19:00:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.079 19:00:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.079 19:00:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.079 19:00:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.079 19:00:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.079 19:00:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.079 19:00:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.991 19:00:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:18.991 00:28:18.991 real 0m7.439s 00:28:18.991 user 0m12.407s 00:28:18.991 sys 0m2.150s 00:28:18.991 19:00:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:18.991 19:00:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:18.991 ************************************ 00:28:18.991 END TEST nvmf_multicontroller 00:28:18.991 ************************************ 00:28:18.991 19:00:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:18.991 19:00:07 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:18.991 19:00:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:18.991 19:00:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:18.991 19:00:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:18.991 ************************************ 00:28:18.991 START TEST nvmf_aer 00:28:18.991 ************************************ 00:28:18.991 19:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:19.249 * Looking for test storage... 
00:28:19.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.249 19:00:07 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.250 19:00:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.157 19:00:09 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:21.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:21.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:21.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:21.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.157 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:28:21.158 00:28:21.158 --- 10.0.0.2 ping statistics --- 00:28:21.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.158 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:28:21.158 00:28:21.158 --- 10.0.0.1 ping statistics --- 00:28:21.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.158 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3688339 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3688339 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3688339 ']' 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.158 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.158 [2024-07-14 19:00:09.278957] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:21.158 [2024-07-14 19:00:09.279044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.158 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.158 [2024-07-14 19:00:09.344884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.416 [2024-07-14 19:00:09.432230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:21.416 [2024-07-14 19:00:09.432282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.416 [2024-07-14 19:00:09.432302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.416 [2024-07-14 19:00:09.432313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.416 [2024-07-14 19:00:09.432322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.416 [2024-07-14 19:00:09.432419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.416 [2024-07-14 19:00:09.432471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.416 [2024-07-14 19:00:09.432473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.416 [2024-07-14 19:00:09.432445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.416 [2024-07-14 19:00:09.571482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.416 19:00:09 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.416 Malloc0 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.416 [2024-07-14 19:00:09.622700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.416 [ 00:28:21.416 { 00:28:21.416 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:21.416 "subtype": "Discovery", 00:28:21.416 "listen_addresses": [], 00:28:21.416 "allow_any_host": true, 00:28:21.416 "hosts": [] 00:28:21.416 }, 00:28:21.416 { 00:28:21.416 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.416 "subtype": "NVMe", 00:28:21.416 "listen_addresses": [ 00:28:21.416 { 00:28:21.416 "trtype": "TCP", 00:28:21.416 "adrfam": "IPv4", 00:28:21.416 "traddr": "10.0.0.2", 00:28:21.416 "trsvcid": "4420" 00:28:21.416 } 00:28:21.416 ], 00:28:21.416 "allow_any_host": true, 00:28:21.416 "hosts": [], 00:28:21.416 "serial_number": "SPDK00000000000001", 00:28:21.416 "model_number": "SPDK bdev Controller", 00:28:21.416 "max_namespaces": 2, 00:28:21.416 "min_cntlid": 1, 00:28:21.416 "max_cntlid": 65519, 00:28:21.416 "namespaces": [ 00:28:21.416 { 00:28:21.416 "nsid": 1, 00:28:21.416 "bdev_name": "Malloc0", 00:28:21.416 "name": "Malloc0", 00:28:21.416 "nguid": "708D8E452A0B4AD581EC306ED1F49EF3", 00:28:21.416 "uuid": "708d8e45-2a0b-4ad5-81ec-306ed1f49ef3" 00:28:21.416 } 00:28:21.416 ] 00:28:21.416 } 00:28:21.416 ] 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3688368 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:21.416 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:21.676 19:00:09 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:21.676 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.676 Malloc1 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.676 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.934 [ 00:28:21.934 { 00:28:21.934 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:21.934 "subtype": "Discovery", 00:28:21.934 "listen_addresses": [], 00:28:21.934 "allow_any_host": true, 00:28:21.934 "hosts": [] 00:28:21.934 }, 00:28:21.934 { 00:28:21.934 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.934 "subtype": "NVMe", 00:28:21.934 "listen_addresses": [ 00:28:21.934 { 00:28:21.934 "trtype": "TCP", 00:28:21.934 "adrfam": "IPv4", 00:28:21.934 "traddr": "10.0.0.2", 00:28:21.934 "trsvcid": "4420" 00:28:21.934 } 00:28:21.934 ], 00:28:21.934 "allow_any_host": true, 00:28:21.934 "hosts": [], 00:28:21.934 "serial_number": "SPDK00000000000001", 00:28:21.934 "model_number": "SPDK bdev Controller", 00:28:21.934 "max_namespaces": 2, 00:28:21.934 "min_cntlid": 1, 00:28:21.934 "max_cntlid": 65519, 
00:28:21.934 "namespaces": [ 00:28:21.934 { 00:28:21.934 "nsid": 1, 00:28:21.934 "bdev_name": "Malloc0", 00:28:21.934 "name": "Malloc0", 00:28:21.934 "nguid": "708D8E452A0B4AD581EC306ED1F49EF3", 00:28:21.934 "uuid": "708d8e45-2a0b-4ad5-81ec-306ed1f49ef3" 00:28:21.934 }, 00:28:21.934 { 00:28:21.934 "nsid": 2, 00:28:21.934 "bdev_name": "Malloc1", 00:28:21.934 "name": "Malloc1", 00:28:21.934 "nguid": "0EF0E6A4B3D74B36AFAAE6FE0FCECDC3", 00:28:21.934 "uuid": "0ef0e6a4-b3d7-4b36-afaa-e6fe0fcecdc3" 00:28:21.934 } 00:28:21.934 ] 00:28:21.934 } 00:28:21.934 ] 00:28:21.934 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.934 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3688368 00:28:21.934 Asynchronous Event Request test 00:28:21.934 Attaching to 10.0.0.2 00:28:21.934 Attached to 10.0.0.2 00:28:21.934 Registering asynchronous event callbacks... 00:28:21.934 Starting namespace attribute notice tests for all controllers... 00:28:21.934 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:21.934 aer_cb - Changed Namespace 00:28:21.934 Cleaning up... 
00:28:21.934 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:21.935 19:00:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:21.935 rmmod nvme_tcp 00:28:21.935 rmmod nvme_fabrics 00:28:21.935 rmmod nvme_keyring 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer 
-- nvmf/common.sh@124 -- # set -e 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3688339 ']' 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3688339 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3688339 ']' 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3688339 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3688339 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3688339' 00:28:21.935 killing process with pid 3688339 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3688339 00:28:21.935 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3688339 00:28:22.194 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:22.194 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:22.194 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:22.194 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.194 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.194 19:00:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.194 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:28:22.194 19:00:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.763 19:00:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:24.763 00:28:24.763 real 0m5.190s 00:28:24.763 user 0m4.072s 00:28:24.763 sys 0m1.808s 00:28:24.763 19:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:24.763 19:00:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:24.763 ************************************ 00:28:24.763 END TEST nvmf_aer 00:28:24.763 ************************************ 00:28:24.763 19:00:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:24.763 19:00:12 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:24.763 19:00:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:24.763 19:00:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.763 19:00:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:24.763 ************************************ 00:28:24.763 START TEST nvmf_async_init 00:28:24.763 ************************************ 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:24.763 * Looking for test storage... 
00:28:24.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ac7d3a2dbbfc4928adc036c5607051fc 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:24.763 19:00:12 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.660 
19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:26.660 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:26.660 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:26.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:26.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:26.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:26.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:26.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:28:26.661 00:28:26.661 --- 10.0.0.2 ping statistics --- 00:28:26.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.661 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:26.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:28:26.661 00:28:26.661 --- 10.0.0.1 ping statistics --- 00:28:26.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.661 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3690419 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3690419 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3690419 ']' 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:26.661 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:26.661 [2024-07-14 19:00:14.622859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:26.661 [2024-07-14 19:00:14.622972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.661 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.661 [2024-07-14 19:00:14.691086] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.661 [2024-07-14 19:00:14.779591] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:26.661 [2024-07-14 19:00:14.779654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.661 [2024-07-14 19:00:14.779679] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.661 [2024-07-14 19:00:14.779693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.661 [2024-07-14 19:00:14.779705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.661 [2024-07-14 19:00:14.779737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 [2024-07-14 19:00:14.927393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 null0 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ac7d3a2dbbfc4928adc036c5607051fc 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:26.918 [2024-07-14 19:00:14.967669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.918 19:00:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.175 nvme0n1 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.175 [ 00:28:27.175 { 00:28:27.175 "name": "nvme0n1", 00:28:27.175 "aliases": [ 00:28:27.175 "ac7d3a2d-bbfc-4928-adc0-36c5607051fc" 00:28:27.175 ], 00:28:27.175 "product_name": "NVMe disk", 00:28:27.175 "block_size": 512, 00:28:27.175 "num_blocks": 2097152, 00:28:27.175 "uuid": "ac7d3a2d-bbfc-4928-adc0-36c5607051fc", 00:28:27.175 "assigned_rate_limits": { 00:28:27.175 "rw_ios_per_sec": 0, 00:28:27.175 "rw_mbytes_per_sec": 0, 00:28:27.175 "r_mbytes_per_sec": 0, 00:28:27.175 "w_mbytes_per_sec": 0 00:28:27.175 }, 00:28:27.175 "claimed": false, 00:28:27.175 "zoned": false, 00:28:27.175 "supported_io_types": { 00:28:27.175 "read": true, 00:28:27.175 "write": true, 00:28:27.175 "unmap": false, 00:28:27.175 "flush": true, 00:28:27.175 "reset": true, 00:28:27.175 "nvme_admin": true, 00:28:27.175 "nvme_io": true, 00:28:27.175 "nvme_io_md": false, 00:28:27.175 "write_zeroes": true, 00:28:27.175 "zcopy": false, 00:28:27.175 "get_zone_info": false, 00:28:27.175 "zone_management": false, 00:28:27.175 "zone_append": false, 00:28:27.175 "compare": 
true, 00:28:27.175 "compare_and_write": true, 00:28:27.175 "abort": true, 00:28:27.175 "seek_hole": false, 00:28:27.175 "seek_data": false, 00:28:27.175 "copy": true, 00:28:27.175 "nvme_iov_md": false 00:28:27.175 }, 00:28:27.175 "memory_domains": [ 00:28:27.175 { 00:28:27.175 "dma_device_id": "system", 00:28:27.175 "dma_device_type": 1 00:28:27.175 } 00:28:27.175 ], 00:28:27.175 "driver_specific": { 00:28:27.175 "nvme": [ 00:28:27.175 { 00:28:27.175 "trid": { 00:28:27.175 "trtype": "TCP", 00:28:27.175 "adrfam": "IPv4", 00:28:27.175 "traddr": "10.0.0.2", 00:28:27.175 "trsvcid": "4420", 00:28:27.175 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:27.175 }, 00:28:27.175 "ctrlr_data": { 00:28:27.175 "cntlid": 1, 00:28:27.175 "vendor_id": "0x8086", 00:28:27.175 "model_number": "SPDK bdev Controller", 00:28:27.175 "serial_number": "00000000000000000000", 00:28:27.175 "firmware_revision": "24.09", 00:28:27.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:27.175 "oacs": { 00:28:27.175 "security": 0, 00:28:27.175 "format": 0, 00:28:27.175 "firmware": 0, 00:28:27.175 "ns_manage": 0 00:28:27.175 }, 00:28:27.175 "multi_ctrlr": true, 00:28:27.175 "ana_reporting": false 00:28:27.175 }, 00:28:27.175 "vs": { 00:28:27.175 "nvme_version": "1.3" 00:28:27.175 }, 00:28:27.175 "ns_data": { 00:28:27.175 "id": 1, 00:28:27.175 "can_share": true 00:28:27.175 } 00:28:27.175 } 00:28:27.175 ], 00:28:27.175 "mp_policy": "active_passive" 00:28:27.175 } 00:28:27.175 } 00:28:27.175 ] 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.175 [2024-07-14 19:00:15.220762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:27.175 [2024-07-14 19:00:15.220864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e9500 (9): Bad file descriptor 00:28:27.175 [2024-07-14 19:00:15.363023] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.175 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.175 [ 00:28:27.175 { 00:28:27.175 "name": "nvme0n1", 00:28:27.175 "aliases": [ 00:28:27.175 "ac7d3a2d-bbfc-4928-adc0-36c5607051fc" 00:28:27.175 ], 00:28:27.175 "product_name": "NVMe disk", 00:28:27.175 "block_size": 512, 00:28:27.175 "num_blocks": 2097152, 00:28:27.175 "uuid": "ac7d3a2d-bbfc-4928-adc0-36c5607051fc", 00:28:27.175 "assigned_rate_limits": { 00:28:27.175 "rw_ios_per_sec": 0, 00:28:27.175 "rw_mbytes_per_sec": 0, 00:28:27.175 "r_mbytes_per_sec": 0, 00:28:27.175 "w_mbytes_per_sec": 0 00:28:27.175 }, 00:28:27.175 "claimed": false, 00:28:27.175 "zoned": false, 00:28:27.175 "supported_io_types": { 00:28:27.175 "read": true, 00:28:27.175 "write": true, 00:28:27.175 "unmap": false, 00:28:27.175 "flush": true, 00:28:27.175 "reset": true, 00:28:27.175 "nvme_admin": true, 00:28:27.175 "nvme_io": true, 00:28:27.175 "nvme_io_md": false, 00:28:27.175 "write_zeroes": true, 00:28:27.175 "zcopy": false, 00:28:27.175 "get_zone_info": false, 00:28:27.175 "zone_management": false, 00:28:27.175 "zone_append": false, 00:28:27.175 "compare": true, 00:28:27.175 "compare_and_write": true, 00:28:27.175 "abort": true, 00:28:27.175 "seek_hole": false, 00:28:27.175 "seek_data": false, 00:28:27.175 "copy": true, 00:28:27.175 "nvme_iov_md": 
false 00:28:27.175 }, 00:28:27.175 "memory_domains": [ 00:28:27.175 { 00:28:27.175 "dma_device_id": "system", 00:28:27.175 "dma_device_type": 1 00:28:27.175 } 00:28:27.175 ], 00:28:27.175 "driver_specific": { 00:28:27.175 "nvme": [ 00:28:27.175 { 00:28:27.175 "trid": { 00:28:27.175 "trtype": "TCP", 00:28:27.175 "adrfam": "IPv4", 00:28:27.175 "traddr": "10.0.0.2", 00:28:27.175 "trsvcid": "4420", 00:28:27.175 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:27.175 }, 00:28:27.175 "ctrlr_data": { 00:28:27.175 "cntlid": 2, 00:28:27.175 "vendor_id": "0x8086", 00:28:27.175 "model_number": "SPDK bdev Controller", 00:28:27.175 "serial_number": "00000000000000000000", 00:28:27.175 "firmware_revision": "24.09", 00:28:27.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:27.175 "oacs": { 00:28:27.175 "security": 0, 00:28:27.175 "format": 0, 00:28:27.175 "firmware": 0, 00:28:27.175 "ns_manage": 0 00:28:27.175 }, 00:28:27.175 "multi_ctrlr": true, 00:28:27.175 "ana_reporting": false 00:28:27.175 }, 00:28:27.175 "vs": { 00:28:27.175 "nvme_version": "1.3" 00:28:27.175 }, 00:28:27.175 "ns_data": { 00:28:27.176 "id": 1, 00:28:27.176 "can_share": true 00:28:27.176 } 00:28:27.176 } 00:28:27.176 ], 00:28:27.176 "mp_policy": "active_passive" 00:28:27.176 } 00:28:27.176 } 00:28:27.176 ] 00:28:27.176 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.176 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.176 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.176 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.176 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.176 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rk64N3E1I6 00:28:27.444 19:00:15 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rk64N3E1I6 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.444 [2024-07-14 19:00:15.417484] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:27.444 [2024-07-14 19:00:15.417656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rk64N3E1I6 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.444 [2024-07-14 19:00:15.425491] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rk64N3E1I6 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.444 [2024-07-14 19:00:15.433514] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:27.444 [2024-07-14 19:00:15.433569] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:27.444 nvme0n1 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.444 [ 00:28:27.444 { 00:28:27.444 "name": "nvme0n1", 00:28:27.444 "aliases": [ 00:28:27.444 "ac7d3a2d-bbfc-4928-adc0-36c5607051fc" 00:28:27.444 ], 00:28:27.444 "product_name": "NVMe disk", 00:28:27.444 "block_size": 512, 00:28:27.444 "num_blocks": 2097152, 00:28:27.444 "uuid": "ac7d3a2d-bbfc-4928-adc0-36c5607051fc", 00:28:27.444 "assigned_rate_limits": { 00:28:27.444 "rw_ios_per_sec": 0, 00:28:27.444 "rw_mbytes_per_sec": 0, 00:28:27.444 "r_mbytes_per_sec": 0, 00:28:27.444 "w_mbytes_per_sec": 0 00:28:27.444 }, 00:28:27.444 "claimed": false, 00:28:27.444 "zoned": false, 00:28:27.444 "supported_io_types": { 00:28:27.444 "read": true, 00:28:27.444 "write": true, 00:28:27.444 "unmap": false, 00:28:27.444 "flush": true, 00:28:27.444 "reset": true, 
00:28:27.444 "nvme_admin": true, 00:28:27.444 "nvme_io": true, 00:28:27.444 "nvme_io_md": false, 00:28:27.444 "write_zeroes": true, 00:28:27.444 "zcopy": false, 00:28:27.444 "get_zone_info": false, 00:28:27.444 "zone_management": false, 00:28:27.444 "zone_append": false, 00:28:27.444 "compare": true, 00:28:27.444 "compare_and_write": true, 00:28:27.444 "abort": true, 00:28:27.444 "seek_hole": false, 00:28:27.444 "seek_data": false, 00:28:27.444 "copy": true, 00:28:27.444 "nvme_iov_md": false 00:28:27.444 }, 00:28:27.444 "memory_domains": [ 00:28:27.444 { 00:28:27.444 "dma_device_id": "system", 00:28:27.444 "dma_device_type": 1 00:28:27.444 } 00:28:27.444 ], 00:28:27.444 "driver_specific": { 00:28:27.444 "nvme": [ 00:28:27.444 { 00:28:27.444 "trid": { 00:28:27.444 "trtype": "TCP", 00:28:27.444 "adrfam": "IPv4", 00:28:27.444 "traddr": "10.0.0.2", 00:28:27.444 "trsvcid": "4421", 00:28:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:27.444 }, 00:28:27.444 "ctrlr_data": { 00:28:27.444 "cntlid": 3, 00:28:27.444 "vendor_id": "0x8086", 00:28:27.444 "model_number": "SPDK bdev Controller", 00:28:27.444 "serial_number": "00000000000000000000", 00:28:27.444 "firmware_revision": "24.09", 00:28:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:27.444 "oacs": { 00:28:27.444 "security": 0, 00:28:27.444 "format": 0, 00:28:27.444 "firmware": 0, 00:28:27.444 "ns_manage": 0 00:28:27.444 }, 00:28:27.444 "multi_ctrlr": true, 00:28:27.444 "ana_reporting": false 00:28:27.444 }, 00:28:27.444 "vs": { 00:28:27.444 "nvme_version": "1.3" 00:28:27.444 }, 00:28:27.444 "ns_data": { 00:28:27.444 "id": 1, 00:28:27.444 "can_share": true 00:28:27.444 } 00:28:27.444 } 00:28:27.444 ], 00:28:27.444 "mp_policy": "active_passive" 00:28:27.444 } 00:28:27.444 } 00:28:27.444 ] 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.rk64N3E1I6 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:27.444 rmmod nvme_tcp 00:28:27.444 rmmod nvme_fabrics 00:28:27.444 rmmod nvme_keyring 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3690419 ']' 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3690419 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3690419 ']' 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3690419 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:27.444 19:00:15 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3690419 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3690419' 00:28:27.444 killing process with pid 3690419 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3690419 00:28:27.444 [2024-07-14 19:00:15.608469] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:27.444 [2024-07-14 19:00:15.608503] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:27.444 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3690419 00:28:27.709 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:27.709 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:27.709 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:27.709 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:27.709 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:27.709 19:00:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.709 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.709 19:00:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.623 19:00:17 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:29.623 00:28:29.623 real 0m5.424s 00:28:29.623 user 0m1.998s 00:28:29.623 sys 0m1.795s 00:28:29.623 19:00:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:29.623 19:00:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:29.623 ************************************ 00:28:29.623 END TEST nvmf_async_init 00:28:29.623 ************************************ 00:28:29.881 19:00:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:29.881 19:00:17 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:29.881 19:00:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:29.881 19:00:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.881 19:00:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.881 ************************************ 00:28:29.881 START TEST dma 00:28:29.881 ************************************ 00:28:29.881 19:00:17 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:29.881 * Looking for test storage... 
00:28:29.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:29.881 19:00:17 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.881 19:00:17 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:28:29.881 19:00:17 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.881 19:00:17 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.881 19:00:17 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.881 19:00:17 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.881 19:00:17 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
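The `paths/export.sh@2`–`@4` lines above re-prepend the same toolchain directories (`/opt/golangci/...`, `/opt/protoc/...`, `/opt/go/...`) on every source, so the exported PATH is full of duplicates. A minimal sketch of deduplicating such a PATH-like value — `dedup_path` is a hypothetical helper, not part of the SPDK scripts:

```shell
# Hypothetical helper: collapse duplicate entries in a PATH-like string,
# keeping the first occurrence of each directory in order.
dedup_path() {
  printf '%s' "$1" | awk -v RS=':' '!seen[$0]++' | paste -s -d ':' -
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin"
# prints /opt/go/bin:/usr/bin:/bin
```

The actual export.sh clearly does not do this — the echoed PATH in the log repeats each toolchain directory several times — but a filter like the above is the usual fix when repeated sourcing inflates PATH.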
00:28:29.881 19:00:17 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:29.881 19:00:17 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.881 19:00:17 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.881 19:00:17 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:29.881 19:00:17 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:29.881 00:28:29.881 real 0m0.062s 00:28:29.881 user 0m0.027s 00:28:29.881 sys 0m0.040s 00:28:29.881 19:00:17 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:29.881 19:00:17 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:29.881 ************************************ 00:28:29.881 END TEST dma 00:28:29.881 ************************************ 00:28:29.881 19:00:17 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:28:29.881 19:00:17 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:29.881 19:00:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:29.881 19:00:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.881 19:00:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.881 ************************************ 00:28:29.881 START TEST nvmf_identify 00:28:29.881 ************************************ 00:28:29.881 19:00:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:29.881 * Looking for test storage... 00:28:29.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
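Earlier in the log, `nvmf/common.sh@17` derives `NVME_HOSTNQN` by calling `nvme gen-hostnqn`, which yields an NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`. A deterministic stand-in for that step, using the host ID this run actually produced (real `nvme gen-hostnqn` generates a fresh UUID each call):

```shell
# Stand-in for `nvme gen-hostnqn`: build a host NQN from a UUID.
# The UUID below is the NVME_HOSTID visible in this log, fixed here
# so the output is deterministic.
uuid=5b23e107-7094-e311-b1cb-001e67a97d55
echo "nqn.2014-08.org.nvmexpress:uuid:${uuid}"
# prints nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
```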
00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.881 19:00:18 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
00:28:29.882 19:00:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:31.779 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
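The `gather_supported_nvmf_pci_devs` lines above sort PCI devices into `e810`, `x722`, and `mlx` buckets keyed on vendor:device IDs (Intel `0x8086`, Mellanox `0x15b3`), and this run classifies `0x8086:0x159b` as an E810 NIC bound to the `ice` driver. A condensed sketch of that classification — `classify_nic` is a hypothetical helper covering only the IDs visible in the log, not the full table in common.sh:

```shell
# Hypothetical sketch of common.sh's vendor:device classification,
# limited to IDs that appear in this log.
classify_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086:0x159b
# prints e810 (the two devices found at 0000:0a:00.0/1 in this run)
```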
00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:31.779 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:31.779 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:31.779 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.779 19:00:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:32.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:32.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:28:32.036 00:28:32.036 --- 10.0.0.2 ping statistics --- 00:28:32.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.036 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:32.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:28:32.036 00:28:32.036 --- 10.0.0.1 ping statistics --- 00:28:32.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.036 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:32.036 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3692436 00:28:32.037 19:00:20 
nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3692436 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3692436 ']' 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:32.037 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.037 [2024-07-14 19:00:20.169662] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:32.037 [2024-07-14 19:00:20.169765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.037 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.037 [2024-07-14 19:00:20.240965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:32.294 [2024-07-14 19:00:20.338924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
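`waitforlisten 3692436` above blocks until the freshly launched `nvmf_tgt` is listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100`. The real helper polls the RPC endpoint; the sketch below is a simplified stand-in that only checks for the UNIX socket's existence, with the same retry-cap idea:

```shell
# Simplified stand-in for waitforlisten: poll until a UNIX socket exists,
# giving up after a bounded number of tries (cf. max_retries=100 in the log).
# The real SPDK helper additionally verifies the RPC server answers.
wait_for_sock() {
  sock=$1
  tries=${2:-100}
  i=0
  while [ ! -S "$sock" ] && [ "$i" -lt "$tries" ]; do
    i=$((i + 1))
    sleep 0.1
  done
  [ -S "$sock" ]   # exit status reports whether the socket appeared
}
```

Usage would be `wait_for_sock /var/tmp/spdk.sock 100` right after starting the target under `ip netns exec`.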
00:28:32.294 [2024-07-14 19:00:20.338993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.294 [2024-07-14 19:00:20.339020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.294 [2024-07-14 19:00:20.339033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.294 [2024-07-14 19:00:20.339044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.294 [2024-07-14 19:00:20.339098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.294 [2024-07-14 19:00:20.339170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.294 [2024-07-14 19:00:20.339225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.294 [2024-07-14 19:00:20.339228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.294 [2024-07-14 19:00:20.473684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.294 19:00:20 
nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.294 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.555 Malloc0 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.555 [2024-07-14 19:00:20.555650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.555 [ 00:28:32.555 { 00:28:32.555 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:32.555 "subtype": "Discovery", 00:28:32.555 "listen_addresses": [ 00:28:32.555 { 00:28:32.555 "trtype": "TCP", 00:28:32.555 "adrfam": "IPv4", 00:28:32.555 "traddr": "10.0.0.2", 00:28:32.555 "trsvcid": "4420" 00:28:32.555 } 00:28:32.555 ], 00:28:32.555 "allow_any_host": true, 00:28:32.555 "hosts": [] 00:28:32.555 }, 00:28:32.555 { 00:28:32.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:32.555 "subtype": "NVMe", 00:28:32.555 "listen_addresses": [ 00:28:32.555 { 00:28:32.555 "trtype": "TCP", 00:28:32.555 "adrfam": "IPv4", 00:28:32.555 "traddr": "10.0.0.2", 00:28:32.555 "trsvcid": "4420" 00:28:32.555 } 00:28:32.555 ], 00:28:32.555 "allow_any_host": true, 00:28:32.555 "hosts": [], 00:28:32.555 "serial_number": "SPDK00000000000001", 00:28:32.555 "model_number": "SPDK bdev Controller", 00:28:32.555 "max_namespaces": 32, 00:28:32.555 "min_cntlid": 1, 00:28:32.555 "max_cntlid": 65519, 00:28:32.555 "namespaces": [ 00:28:32.555 { 00:28:32.555 "nsid": 1, 00:28:32.555 "bdev_name": "Malloc0", 00:28:32.555 "name": "Malloc0", 00:28:32.555 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:32.555 "eui64": "ABCDEF0123456789", 00:28:32.555 "uuid": "b7832edb-fd93-4f12-8500-320e0895eb9f" 00:28:32.555 } 00:28:32.555 ] 00:28:32.555 } 00:28:32.555 ] 00:28:32.555 19:00:20 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.555 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:32.555 [2024-07-14 19:00:20.597851] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:32.555 [2024-07-14 19:00:20.597918] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692568 ] 00:28:32.555 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.555 [2024-07-14 19:00:20.633350] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:32.555 [2024-07-14 19:00:20.633416] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:32.555 [2024-07-14 19:00:20.633426] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:32.555 [2024-07-14 19:00:20.633441] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:32.555 [2024-07-14 19:00:20.633451] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:32.555 [2024-07-14 19:00:20.636923] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:32.555 [2024-07-14 19:00:20.636993] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb96fe0 0 00:28:32.555 [2024-07-14 19:00:20.644890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:32.555 [2024-07-14 19:00:20.644910] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:32.555 [2024-07-14 19:00:20.644918] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:32.555 [2024-07-14 19:00:20.644924] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:32.555 [2024-07-14 19:00:20.644974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.555 [2024-07-14 19:00:20.644987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.555 [2024-07-14 19:00:20.644994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.555 [2024-07-14 19:00:20.645010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:32.555 [2024-07-14 19:00:20.645035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 00:28:32.555 [2024-07-14 19:00:20.652895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.555 [2024-07-14 19:00:20.652914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.556 [2024-07-14 19:00:20.652921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.652929] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.556 [2024-07-14 19:00:20.652948] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:32.556 [2024-07-14 19:00:20.652960] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:32.556 [2024-07-14 19:00:20.652968] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:32.556 [2024-07-14 19:00:20.652989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.556 [2024-07-14 
19:00:20.652998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.556 [2024-07-14 19:00:20.653015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.556 [2024-07-14 19:00:20.653038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 00:28:32.556 [2024-07-14 19:00:20.653185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.556 [2024-07-14 19:00:20.653197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.556 [2024-07-14 19:00:20.653204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.556 [2024-07-14 19:00:20.653220] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:32.556 [2024-07-14 19:00:20.653232] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:32.556 [2024-07-14 19:00:20.653244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.556 [2024-07-14 19:00:20.653269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.556 [2024-07-14 19:00:20.653289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 
00:28:32.556 [2024-07-14 19:00:20.653382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.556 [2024-07-14 19:00:20.653397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.556 [2024-07-14 19:00:20.653404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.556 [2024-07-14 19:00:20.653423] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:32.556 [2024-07-14 19:00:20.653439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:32.556 [2024-07-14 19:00:20.653451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.556 [2024-07-14 19:00:20.653475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.556 [2024-07-14 19:00:20.653496] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 00:28:32.556 [2024-07-14 19:00:20.653585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.556 [2024-07-14 19:00:20.653600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.556 [2024-07-14 19:00:20.653607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.556 [2024-07-14 19:00:20.653623] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:32.556 [2024-07-14 19:00:20.653639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.556 [2024-07-14 19:00:20.653665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.556 [2024-07-14 19:00:20.653685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 00:28:32.556 [2024-07-14 19:00:20.653771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.556 [2024-07-14 19:00:20.653786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.556 [2024-07-14 19:00:20.653793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.556 [2024-07-14 19:00:20.653807] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:32.556 [2024-07-14 19:00:20.653816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:32.556 [2024-07-14 19:00:20.653829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:32.556 [2024-07-14 19:00:20.653939] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 
00:28:32.556 [2024-07-14 19:00:20.653949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:32.556 [2024-07-14 19:00:20.653962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.653976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.556 [2024-07-14 19:00:20.653987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.556 [2024-07-14 19:00:20.654008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 00:28:32.556 [2024-07-14 19:00:20.654133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.556 [2024-07-14 19:00:20.654151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.556 [2024-07-14 19:00:20.654159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.556 [2024-07-14 19:00:20.654174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:32.556 [2024-07-14 19:00:20.654190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.556 [2024-07-14 19:00:20.654215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.556 [2024-07-14 19:00:20.654236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 00:28:32.556 [2024-07-14 19:00:20.654325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.556 [2024-07-14 19:00:20.654340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.556 [2024-07-14 19:00:20.654346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.556 [2024-07-14 19:00:20.654361] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:32.556 [2024-07-14 19:00:20.654369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:32.556 [2024-07-14 19:00:20.654382] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:32.556 [2024-07-14 19:00:20.654396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:32.556 [2024-07-14 19:00:20.654411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.556 [2024-07-14 19:00:20.654429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.556 [2024-07-14 19:00:20.654450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 00:28:32.556 [2024-07-14 
19:00:20.654613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.556 [2024-07-14 19:00:20.654629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.556 [2024-07-14 19:00:20.654635] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654642] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb96fe0): datao=0, datal=4096, cccid=0 00:28:32.556 [2024-07-14 19:00:20.654650] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfd880) on tqpair(0xb96fe0): expected_datao=0, payload_size=4096 00:28:32.556 [2024-07-14 19:00:20.654658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654669] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654676] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.556 [2024-07-14 19:00:20.654698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.556 [2024-07-14 19:00:20.654704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.556 [2024-07-14 19:00:20.654722] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:32.556 [2024-07-14 19:00:20.654740] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:32.556 [2024-07-14 19:00:20.654748] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:32.556 [2024-07-14 19:00:20.654757] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:32.556 [2024-07-14 19:00:20.654765] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:32.556 [2024-07-14 19:00:20.654772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:32.556 [2024-07-14 19:00:20.654787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:32.556 [2024-07-14 19:00:20.654799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.556 [2024-07-14 19:00:20.654812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.556 [2024-07-14 19:00:20.654823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:32.556 [2024-07-14 19:00:20.654844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 00:28:32.556 [2024-07-14 19:00:20.654954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.556 [2024-07-14 19:00:20.654969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.556 [2024-07-14 19:00:20.654976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.654983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.557 [2024-07-14 19:00:20.654994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.655002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.655008] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.655018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.557 [2024-07-14 19:00:20.655028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.655034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.655041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.655049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.557 [2024-07-14 19:00:20.655058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.655065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.655071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.655080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.557 [2024-07-14 19:00:20.655089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.655095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.655102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.655110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.557 [2024-07-14 19:00:20.655118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive 
timeout (timeout 30000 ms) 00:28:32.557 [2024-07-14 19:00:20.655137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:32.557 [2024-07-14 19:00:20.655168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.655176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.655186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.557 [2024-07-14 19:00:20.655208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfd880, cid 0, qid 0 00:28:32.557 [2024-07-14 19:00:20.655234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfda00, cid 1, qid 0 00:28:32.557 [2024-07-14 19:00:20.655242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdb80, cid 2, qid 0 00:28:32.557 [2024-07-14 19:00:20.655250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.557 [2024-07-14 19:00:20.655257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde80, cid 4, qid 0 00:28:32.557 [2024-07-14 19:00:20.658888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.557 [2024-07-14 19:00:20.658905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.557 [2024-07-14 19:00:20.658912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.658918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfde80) on tqpair=0xb96fe0 00:28:32.557 [2024-07-14 19:00:20.658927] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:32.557 [2024-07-14 
19:00:20.658936] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:32.557 [2024-07-14 19:00:20.658954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.658963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.658973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.557 [2024-07-14 19:00:20.658995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde80, cid 4, qid 0 00:28:32.557 [2024-07-14 19:00:20.659137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.557 [2024-07-14 19:00:20.659152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.557 [2024-07-14 19:00:20.659158] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659165] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb96fe0): datao=0, datal=4096, cccid=4 00:28:32.557 [2024-07-14 19:00:20.659172] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfde80) on tqpair(0xb96fe0): expected_datao=0, payload_size=4096 00:28:32.557 [2024-07-14 19:00:20.659180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659190] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659197] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.557 [2024-07-14 19:00:20.659218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.557 [2024-07-14 19:00:20.659224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfde80) on tqpair=0xb96fe0 00:28:32.557 [2024-07-14 19:00:20.659249] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:32.557 [2024-07-14 19:00:20.659286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.659307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.557 [2024-07-14 19:00:20.659325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.659348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.557 [2024-07-14 19:00:20.659374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde80, cid 4, qid 0 00:28:32.557 [2024-07-14 19:00:20.659386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe000, cid 5, qid 0 00:28:32.557 [2024-07-14 19:00:20.659519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.557 [2024-07-14 19:00:20.659533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.557 [2024-07-14 19:00:20.659540] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659547] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb96fe0): 
datao=0, datal=1024, cccid=4 00:28:32.557 [2024-07-14 19:00:20.659554] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfde80) on tqpair(0xb96fe0): expected_datao=0, payload_size=1024 00:28:32.557 [2024-07-14 19:00:20.659562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659571] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659578] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.557 [2024-07-14 19:00:20.659596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.557 [2024-07-14 19:00:20.659602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.659609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfe000) on tqpair=0xb96fe0 00:28:32.557 [2024-07-14 19:00:20.700021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.557 [2024-07-14 19:00:20.700040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.557 [2024-07-14 19:00:20.700047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfde80) on tqpair=0xb96fe0 00:28:32.557 [2024-07-14 19:00:20.700072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.700092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.557 [2024-07-14 19:00:20.700121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xbfde80, cid 4, qid 0 00:28:32.557 [2024-07-14 19:00:20.700242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.557 [2024-07-14 19:00:20.700255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.557 [2024-07-14 19:00:20.700261] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700268] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb96fe0): datao=0, datal=3072, cccid=4 00:28:32.557 [2024-07-14 19:00:20.700276] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfde80) on tqpair(0xb96fe0): expected_datao=0, payload_size=3072 00:28:32.557 [2024-07-14 19:00:20.700283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700293] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700300] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.557 [2024-07-14 19:00:20.700320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.557 [2024-07-14 19:00:20.700327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfde80) on tqpair=0xb96fe0 00:28:32.557 [2024-07-14 19:00:20.700354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb96fe0) 00:28:32.557 [2024-07-14 19:00:20.700373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.557 [2024-07-14 19:00:20.700400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0xbfde80, cid 4, qid 0 00:28:32.557 [2024-07-14 19:00:20.700510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.557 [2024-07-14 19:00:20.700522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.557 [2024-07-14 19:00:20.700529] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700535] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb96fe0): datao=0, datal=8, cccid=4 00:28:32.557 [2024-07-14 19:00:20.700543] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfde80) on tqpair(0xb96fe0): expected_datao=0, payload_size=8 00:28:32.557 [2024-07-14 19:00:20.700550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700559] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.700566] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.742025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.557 [2024-07-14 19:00:20.742044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.557 [2024-07-14 19:00:20.742051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.557 [2024-07-14 19:00:20.742058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfde80) on tqpair=0xb96fe0 00:28:32.557 ===================================================== 00:28:32.557 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:32.557 ===================================================== 00:28:32.557 Controller Capabilities/Features 00:28:32.557 ================================ 00:28:32.557 Vendor ID: 0000 00:28:32.558 Subsystem Vendor ID: 0000 00:28:32.558 Serial Number: .................... 00:28:32.558 Model Number: ........................................ 
00:28:32.558 Firmware Version: 24.09
00:28:32.558 Recommended Arb Burst: 0
00:28:32.558 IEEE OUI Identifier: 00 00 00
00:28:32.558 Multi-path I/O
00:28:32.558 May have multiple subsystem ports: No
00:28:32.558 May have multiple controllers: No
00:28:32.558 Associated with SR-IOV VF: No
00:28:32.558 Max Data Transfer Size: 131072
00:28:32.558 Max Number of Namespaces: 0
00:28:32.558 Max Number of I/O Queues: 1024
00:28:32.558 NVMe Specification Version (VS): 1.3
00:28:32.558 NVMe Specification Version (Identify): 1.3
00:28:32.558 Maximum Queue Entries: 128
00:28:32.558 Contiguous Queues Required: Yes
00:28:32.558 Arbitration Mechanisms Supported
00:28:32.558 Weighted Round Robin: Not Supported
00:28:32.558 Vendor Specific: Not Supported
00:28:32.558 Reset Timeout: 15000 ms
00:28:32.558 Doorbell Stride: 4 bytes
00:28:32.558 NVM Subsystem Reset: Not Supported
00:28:32.558 Command Sets Supported
00:28:32.558 NVM Command Set: Supported
00:28:32.558 Boot Partition: Not Supported
00:28:32.558 Memory Page Size Minimum: 4096 bytes
00:28:32.558 Memory Page Size Maximum: 4096 bytes
00:28:32.558 Persistent Memory Region: Not Supported
00:28:32.558 Optional Asynchronous Events Supported
00:28:32.558 Namespace Attribute Notices: Not Supported
00:28:32.558 Firmware Activation Notices: Not Supported
00:28:32.558 ANA Change Notices: Not Supported
00:28:32.558 PLE Aggregate Log Change Notices: Not Supported
00:28:32.558 LBA Status Info Alert Notices: Not Supported
00:28:32.558 EGE Aggregate Log Change Notices: Not Supported
00:28:32.558 Normal NVM Subsystem Shutdown event: Not Supported
00:28:32.558 Zone Descriptor Change Notices: Not Supported
00:28:32.558 Discovery Log Change Notices: Supported
00:28:32.558 Controller Attributes
00:28:32.558 128-bit Host Identifier: Not Supported
00:28:32.558 Non-Operational Permissive Mode: Not Supported
00:28:32.558 NVM Sets: Not Supported
00:28:32.558 Read Recovery Levels: Not Supported
00:28:32.558 Endurance Groups: Not Supported
00:28:32.558 Predictable Latency Mode: Not Supported
00:28:32.558 Traffic Based Keep Alive: Not Supported
00:28:32.558 Namespace Granularity: Not Supported
00:28:32.558 SQ Associations: Not Supported
00:28:32.558 UUID List: Not Supported
00:28:32.558 Multi-Domain Subsystem: Not Supported
00:28:32.558 Fixed Capacity Management: Not Supported
00:28:32.558 Variable Capacity Management: Not Supported
00:28:32.558 Delete Endurance Group: Not Supported
00:28:32.558 Delete NVM Set: Not Supported
00:28:32.558 Extended LBA Formats Supported: Not Supported
00:28:32.558 Flexible Data Placement Supported: Not Supported
00:28:32.558
00:28:32.558 Controller Memory Buffer Support
00:28:32.558 ================================
00:28:32.558 Supported: No
00:28:32.558
00:28:32.558 Persistent Memory Region Support
00:28:32.558 ================================
00:28:32.558 Supported: No
00:28:32.558
00:28:32.558 Admin Command Set Attributes
00:28:32.558 ============================
00:28:32.558 Security Send/Receive: Not Supported
00:28:32.558 Format NVM: Not Supported
00:28:32.558 Firmware Activate/Download: Not Supported
00:28:32.558 Namespace Management: Not Supported
00:28:32.558 Device Self-Test: Not Supported
00:28:32.558 Directives: Not Supported
00:28:32.558 NVMe-MI: Not Supported
00:28:32.558 Virtualization Management: Not Supported
00:28:32.558 Doorbell Buffer Config: Not Supported
00:28:32.558 Get LBA Status Capability: Not Supported
00:28:32.558 Command & Feature Lockdown Capability: Not Supported
00:28:32.558 Abort Command Limit: 1
00:28:32.558 Async Event Request Limit: 4
00:28:32.558 Number of Firmware Slots: N/A
00:28:32.558 Firmware Slot 1 Read-Only: N/A
00:28:32.558 Firmware Activation Without Reset: N/A
00:28:32.558 Multiple Update Detection Support: N/A
00:28:32.558 Firmware Update Granularity: No Information Provided
00:28:32.558 Per-Namespace SMART Log: No
00:28:32.558 Asymmetric Namespace Access Log Page: Not Supported
00:28:32.558 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:32.558 Command Effects Log Page: Not Supported
00:28:32.558 Get Log Page Extended Data: Supported
00:28:32.558 Telemetry Log Pages: Not Supported
00:28:32.558 Persistent Event Log Pages: Not Supported
00:28:32.558 Supported Log Pages Log Page: May Support
00:28:32.558 Commands Supported & Effects Log Page: Not Supported
00:28:32.558 Feature Identifiers & Effects Log Page: May Support
00:28:32.558 NVMe-MI Commands & Effects Log Page: May Support
00:28:32.558 Data Area 4 for Telemetry Log: Not Supported
00:28:32.558 Error Log Page Entries Supported: 128
00:28:32.558 Keep Alive: Not Supported
00:28:32.558
00:28:32.558 NVM Command Set Attributes
00:28:32.558 ==========================
00:28:32.558 Submission Queue Entry Size
00:28:32.558 Max: 1
00:28:32.558 Min: 1
00:28:32.558 Completion Queue Entry Size
00:28:32.558 Max: 1
00:28:32.558 Min: 1
00:28:32.558 Number of Namespaces: 0
00:28:32.558 Compare Command: Not Supported
00:28:32.558 Write Uncorrectable Command: Not Supported
00:28:32.558 Dataset Management Command: Not Supported
00:28:32.558 Write Zeroes Command: Not Supported
00:28:32.558 Set Features Save Field: Not Supported
00:28:32.558 Reservations: Not Supported
00:28:32.558 Timestamp: Not Supported
00:28:32.558 Copy: Not Supported
00:28:32.558 Volatile Write Cache: Not Present
00:28:32.558 Atomic Write Unit (Normal): 1
00:28:32.558 Atomic Write Unit (PFail): 1
00:28:32.558 Atomic Compare & Write Unit: 1
00:28:32.558 Fused Compare & Write: Supported
00:28:32.558 Scatter-Gather List
00:28:32.558 SGL Command Set: Supported
00:28:32.558 SGL Keyed: Supported
00:28:32.558 SGL Bit Bucket Descriptor: Not Supported
00:28:32.558 SGL Metadata Pointer: Not Supported
00:28:32.558 Oversized SGL: Not Supported
00:28:32.558 SGL Metadata Address: Not Supported
00:28:32.558 SGL Offset: Supported
00:28:32.558 Transport SGL Data Block: Not Supported
00:28:32.558 Replay Protected Memory Block: Not Supported
00:28:32.558
00:28:32.558
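The identify report above is plain "Field: Value" console text. A minimal sketch of a helper that turns such a dump into a dict for scripted checks; `parse_identify` is our illustrative name, not part of SPDK or its test suite:

```python
def parse_identify(text):
    """Parse 'Field: Value' lines from an identify dump into a dict.

    Lines without a colon (section headers, '====' rules, blanks) are
    skipped. Illustrative helper only; not part of SPDK.
    """
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if ":" not in line or line.startswith("="):
            continue
        # Split on the first colon so values containing ':' survive intact.
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

# A few fields copied from the report above:
dump = """\
Firmware Version: 24.09
Maximum Queue Entries: 128
Keep Alive: Not Supported
"""
info = parse_identify(dump)
```

Splitting on the first colon only matters for values such as `Subsystem NQN: nqn.2016-06.io.spdk:cnode1`, where the NQN itself contains a colon.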
Firmware Slot Information
00:28:32.558 =========================
00:28:32.558 Active slot: 0
00:28:32.558
00:28:32.558
00:28:32.558 Error Log
00:28:32.558 =========
00:28:32.558
00:28:32.558 Active Namespaces
00:28:32.558 =================
00:28:32.558 Discovery Log Page
00:28:32.558 ==================
00:28:32.558 Generation Counter: 2
00:28:32.558 Number of Records: 2
00:28:32.558 Record Format: 0
00:28:32.558
00:28:32.558 Discovery Log Entry 0
00:28:32.558 ----------------------
00:28:32.558 Transport Type: 3 (TCP)
00:28:32.558 Address Family: 1 (IPv4)
00:28:32.558 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:32.558 Entry Flags:
00:28:32.558 Duplicate Returned Information: 1
00:28:32.558 Explicit Persistent Connection Support for Discovery: 1
00:28:32.558 Transport Requirements:
00:28:32.558 Secure Channel: Not Required
00:28:32.558 Port ID: 0 (0x0000)
00:28:32.558 Controller ID: 65535 (0xffff)
00:28:32.558 Admin Max SQ Size: 128
00:28:32.558 Transport Service Identifier: 4420
00:28:32.558 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:32.558 Transport Address: 10.0.0.2
00:28:32.558 Discovery Log Entry 1
00:28:32.558 ----------------------
00:28:32.558 Transport Type: 3 (TCP)
00:28:32.558 Address Family: 1 (IPv4)
00:28:32.558 Subsystem Type: 2 (NVM Subsystem)
00:28:32.558 Entry Flags:
00:28:32.558 Duplicate Returned Information: 0
00:28:32.558 Explicit Persistent Connection Support for Discovery: 0
00:28:32.558 Transport Requirements:
00:28:32.558 Secure Channel: Not Required
00:28:32.558 Port ID: 0 (0x0000)
00:28:32.558 Controller ID: 65535 (0xffff)
00:28:32.558 Admin Max SQ Size: 128
00:28:32.558 Transport Service Identifier: 4420
00:28:32.558 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:32.558 Transport Address: 10.0.0.2 [2024-07-14 19:00:20.742167] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:28:32.558 [2024-07-14 19:00:20.742189]
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfd880) on tqpair=0xb96fe0 00:28:32.558 [2024-07-14 19:00:20.742201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.558 [2024-07-14 19:00:20.742210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfda00) on tqpair=0xb96fe0 00:28:32.558 [2024-07-14 19:00:20.742218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.558 [2024-07-14 19:00:20.742226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdb80) on tqpair=0xb96fe0 00:28:32.558 [2024-07-14 19:00:20.742233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.558 [2024-07-14 19:00:20.742241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.558 [2024-07-14 19:00:20.742248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.558 [2024-07-14 19:00:20.742266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.558 [2024-07-14 19:00:20.742275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.558 [2024-07-14 19:00:20.742281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.559 [2024-07-14 19:00:20.742292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.559 [2024-07-14 19:00:20.742335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.559 [2024-07-14 19:00:20.742498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.559 [2024-07-14 19:00:20.742511] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.559 [2024-07-14 19:00:20.742518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.742528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.559 [2024-07-14 19:00:20.742541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.742548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.742555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.559 [2024-07-14 19:00:20.742565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.559 [2024-07-14 19:00:20.742592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.559 [2024-07-14 19:00:20.742703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.559 [2024-07-14 19:00:20.742718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.559 [2024-07-14 19:00:20.742724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.742731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.559 [2024-07-14 19:00:20.742739] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:32.559 [2024-07-14 19:00:20.742748] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:32.559 [2024-07-14 19:00:20.742764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.742773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.742779] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.559 [2024-07-14 19:00:20.742790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.559 [2024-07-14 19:00:20.742810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.559 [2024-07-14 19:00:20.742906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.559 [2024-07-14 19:00:20.742920] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.559 [2024-07-14 19:00:20.742927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.742934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.559 [2024-07-14 19:00:20.742950] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.742959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.742966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.559 [2024-07-14 19:00:20.742976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.559 [2024-07-14 19:00:20.742997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.559 [2024-07-14 19:00:20.743084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.559 [2024-07-14 19:00:20.743099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.559 [2024-07-14 19:00:20.743105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.559 [2024-07-14 
19:00:20.743128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.559 [2024-07-14 19:00:20.743154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.559 [2024-07-14 19:00:20.743175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.559 [2024-07-14 19:00:20.743257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.559 [2024-07-14 19:00:20.743276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.559 [2024-07-14 19:00:20.743284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.559 [2024-07-14 19:00:20.743307] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.559 [2024-07-14 19:00:20.743333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.559 [2024-07-14 19:00:20.743354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.559 [2024-07-14 19:00:20.743441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.559 [2024-07-14 19:00:20.743455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.559 [2024-07-14 19:00:20.743462] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.559 [2024-07-14 19:00:20.743485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743500] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.559 [2024-07-14 19:00:20.743510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.559 [2024-07-14 19:00:20.743531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.559 [2024-07-14 19:00:20.743620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.559 [2024-07-14 19:00:20.743633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.559 [2024-07-14 19:00:20.743639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.559 [2024-07-14 19:00:20.743662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.559 [2024-07-14 19:00:20.743688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.559 [2024-07-14 19:00:20.743707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.559 [2024-07-14 
19:00:20.743794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.559 [2024-07-14 19:00:20.743808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.559 [2024-07-14 19:00:20.743815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.559 [2024-07-14 19:00:20.743838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.743853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb96fe0) 00:28:32.559 [2024-07-14 19:00:20.743864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.559 [2024-07-14 19:00:20.747892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdd00, cid 3, qid 0 00:28:32.559 [2024-07-14 19:00:20.748053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.559 [2024-07-14 19:00:20.748069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.559 [2024-07-14 19:00:20.748082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.559 [2024-07-14 19:00:20.748090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbfdd00) on tqpair=0xb96fe0 00:28:32.559 [2024-07-14 19:00:20.748104] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:32.559 00:28:32.560 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:32.844 
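Each discovery log entry above carries exactly the fields that make up the transport ID string passed to `spdk_nvme_identify -r` in the command line just above. A minimal sketch of that mapping, assuming the standard NVMe-oF numeric codes (trtype 1=rdma, 2=fc, 3=tcp; adrfam 1=IPv4, 2=IPv6); `trid_from_entry` and the lookup tables are our illustrative names, not SPDK API:

```python
# Map numeric discovery-log-entry fields (as printed in the log page
# above) to the human-readable transport ID string that the identify
# tool accepts via -r. Illustrative helper only; not SPDK code.
TRTYPE = {1: "rdma", 2: "fc", 3: "tcp"}   # NVMe-oF transport type codes
ADRFAM = {1: "IPv4", 2: "IPv6"}           # address family codes

def trid_from_entry(trtype, adrfam, traddr, trsvcid, subnqn):
    """Build 'trtype:... adrfam:... traddr:... trsvcid:... subnqn:...'."""
    return (f"trtype:{TRTYPE[trtype]} adrfam:{ADRFAM[adrfam]} "
            f"traddr:{traddr} trsvcid:{trsvcid} subnqn:{subnqn}")

# Discovery Log Entry 1 from the page above:
trid = trid_from_entry(3, 1, "10.0.0.2", 4420, "nqn.2016-06.io.spdk:cnode1")
```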
[2024-07-14 19:00:20.783545] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:32.844 [2024-07-14 19:00:20.783590] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692570 ] 00:28:32.844 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.844 [2024-07-14 19:00:20.818690] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:32.844 [2024-07-14 19:00:20.818743] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:32.844 [2024-07-14 19:00:20.818753] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:32.844 [2024-07-14 19:00:20.818766] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:32.844 [2024-07-14 19:00:20.818775] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:32.844 [2024-07-14 19:00:20.819043] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:32.844 [2024-07-14 19:00:20.819083] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1246fe0 0 00:28:32.844 [2024-07-14 19:00:20.825895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:32.844 [2024-07-14 19:00:20.825915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:32.844 [2024-07-14 19:00:20.825923] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:32.844 [2024-07-14 19:00:20.825929] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:32.844 [2024-07-14 19:00:20.825968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:28:32.844 [2024-07-14 19:00:20.825980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.844 [2024-07-14 19:00:20.825987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1246fe0) 00:28:32.844 [2024-07-14 19:00:20.826001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:32.844 [2024-07-14 19:00:20.826028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.844 [2024-07-14 19:00:20.833892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.844 [2024-07-14 19:00:20.833910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.844 [2024-07-14 19:00:20.833917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.844 [2024-07-14 19:00:20.833925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on tqpair=0x1246fe0 00:28:32.844 [2024-07-14 19:00:20.833938] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:32.844 [2024-07-14 19:00:20.833948] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:32.844 [2024-07-14 19:00:20.833958] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:32.844 [2024-07-14 19:00:20.833976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.844 [2024-07-14 19:00:20.833989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.844 [2024-07-14 19:00:20.833997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1246fe0) 00:28:32.844 [2024-07-14 19:00:20.834008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.844 [2024-07-14 
19:00:20.834032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.844 [2024-07-14 19:00:20.834164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.844 [2024-07-14 19:00:20.834179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.844 [2024-07-14 19:00:20.834186] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.844 [2024-07-14 19:00:20.834193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on tqpair=0x1246fe0 00:28:32.844 [2024-07-14 19:00:20.834201] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:32.844 [2024-07-14 19:00:20.834214] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:32.844 [2024-07-14 19:00:20.834227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.834251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.845 [2024-07-14 19:00:20.834272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.845 [2024-07-14 19:00:20.834363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.845 [2024-07-14 19:00:20.834378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.845 [2024-07-14 19:00:20.834384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on 
tqpair=0x1246fe0 00:28:32.845 [2024-07-14 19:00:20.834400] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:32.845 [2024-07-14 19:00:20.834414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:32.845 [2024-07-14 19:00:20.834426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.834451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.845 [2024-07-14 19:00:20.834472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.845 [2024-07-14 19:00:20.834562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.845 [2024-07-14 19:00:20.834577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.845 [2024-07-14 19:00:20.834584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on tqpair=0x1246fe0 00:28:32.845 [2024-07-14 19:00:20.834599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:32.845 [2024-07-14 19:00:20.834615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.834641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.845 [2024-07-14 19:00:20.834666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.845 [2024-07-14 19:00:20.834757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.845 [2024-07-14 19:00:20.834769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.845 [2024-07-14 19:00:20.834776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on tqpair=0x1246fe0 00:28:32.845 [2024-07-14 19:00:20.834790] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:32.845 [2024-07-14 19:00:20.834799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:32.845 [2024-07-14 19:00:20.834811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:32.845 [2024-07-14 19:00:20.834921] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:32.845 [2024-07-14 19:00:20.834931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:32.845 [2024-07-14 19:00:20.834943] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834950] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.834957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.834967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.845 [2024-07-14 19:00:20.834988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.845 [2024-07-14 19:00:20.835104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.845 [2024-07-14 19:00:20.835119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.845 [2024-07-14 19:00:20.835126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on tqpair=0x1246fe0 00:28:32.845 [2024-07-14 19:00:20.835141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:32.845 [2024-07-14 19:00:20.835157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.835184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.845 [2024-07-14 19:00:20.835204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.845 [2024-07-14 19:00:20.835295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.845 [2024-07-14 19:00:20.835309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.845 [2024-07-14 19:00:20.835316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835323] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on tqpair=0x1246fe0 00:28:32.845 [2024-07-14 19:00:20.835330] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:32.845 [2024-07-14 19:00:20.835339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:32.845 [2024-07-14 19:00:20.835352] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:32.845 [2024-07-14 19:00:20.835366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:32.845 [2024-07-14 19:00:20.835383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.835402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.845 [2024-07-14 19:00:20.835423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.845 [2024-07-14 19:00:20.835541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.845 [2024-07-14 19:00:20.835554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.845 [2024-07-14 19:00:20.835561] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835567] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1246fe0): datao=0, datal=4096, cccid=0 00:28:32.845 [2024-07-14 19:00:20.835575] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x12ad880) on tqpair(0x1246fe0): expected_datao=0, payload_size=4096 00:28:32.845 [2024-07-14 19:00:20.835583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835599] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835608] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.845 [2024-07-14 19:00:20.835635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.845 [2024-07-14 19:00:20.835642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on tqpair=0x1246fe0 00:28:32.845 [2024-07-14 19:00:20.835659] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:32.845 [2024-07-14 19:00:20.835672] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:32.845 [2024-07-14 19:00:20.835680] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:32.845 [2024-07-14 19:00:20.835687] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:32.845 [2024-07-14 19:00:20.835695] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:32.845 [2024-07-14 19:00:20.835703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:32.845 [2024-07-14 19:00:20.835716] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:32.845 [2024-07-14 19:00:20.835728] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.835753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:32.845 [2024-07-14 19:00:20.835774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.845 [2024-07-14 19:00:20.835869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.845 [2024-07-14 19:00:20.835893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.845 [2024-07-14 19:00:20.835901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on tqpair=0x1246fe0 00:28:32.845 [2024-07-14 19:00:20.835918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.835945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.845 [2024-07-14 19:00:20.835956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.835978] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.845 [2024-07-14 19:00:20.835987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.835994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.845 [2024-07-14 19:00:20.836000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1246fe0) 00:28:32.845 [2024-07-14 19:00:20.836009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.845 [2024-07-14 19:00:20.836018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1246fe0) 00:28:32.846 [2024-07-14 19:00:20.836040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.846 [2024-07-14 19:00:20.836048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.836067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.836080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1246fe0) 00:28:32.846 [2024-07-14 19:00:20.836098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:32.846 [2024-07-14 19:00:20.836121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ad880, cid 0, qid 0 00:28:32.846 [2024-07-14 19:00:20.836132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ada00, cid 1, qid 0 00:28:32.846 [2024-07-14 19:00:20.836140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12adb80, cid 2, qid 0 00:28:32.846 [2024-07-14 19:00:20.836148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12add00, cid 3, qid 0 00:28:32.846 [2024-07-14 19:00:20.836156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ade80, cid 4, qid 0 00:28:32.846 [2024-07-14 19:00:20.836298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.846 [2024-07-14 19:00:20.836311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.846 [2024-07-14 19:00:20.836318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ade80) on tqpair=0x1246fe0 00:28:32.846 [2024-07-14 19:00:20.836333] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:32.846 [2024-07-14 19:00:20.836341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.836355] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.836365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.836380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.846 [2024-07-14 
19:00:20.836388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1246fe0) 00:28:32.846 [2024-07-14 19:00:20.836404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:32.846 [2024-07-14 19:00:20.836425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ade80, cid 4, qid 0 00:28:32.846 [2024-07-14 19:00:20.836612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.846 [2024-07-14 19:00:20.836626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.846 [2024-07-14 19:00:20.836632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ade80) on tqpair=0x1246fe0 00:28:32.846 [2024-07-14 19:00:20.836703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.836720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.836734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1246fe0) 00:28:32.846 [2024-07-14 19:00:20.836753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.846 [2024-07-14 19:00:20.836774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ade80, cid 4, qid 0 00:28:32.846 [2024-07-14 19:00:20.836889] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.846 [2024-07-14 19:00:20.836905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.846 [2024-07-14 19:00:20.836912] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836918] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1246fe0): datao=0, datal=4096, cccid=4 00:28:32.846 [2024-07-14 19:00:20.836926] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ade80) on tqpair(0x1246fe0): expected_datao=0, payload_size=4096 00:28:32.846 [2024-07-14 19:00:20.836934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836944] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836951] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.846 [2024-07-14 19:00:20.836972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.846 [2024-07-14 19:00:20.836979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.836986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ade80) on tqpair=0x1246fe0 00:28:32.846 [2024-07-14 19:00:20.837001] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:32.846 [2024-07-14 19:00:20.837021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.837039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.837053] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.846 
[2024-07-14 19:00:20.837060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1246fe0) 00:28:32.846 [2024-07-14 19:00:20.837071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.846 [2024-07-14 19:00:20.837098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ade80, cid 4, qid 0 00:28:32.846 [2024-07-14 19:00:20.837220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.846 [2024-07-14 19:00:20.837236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.846 [2024-07-14 19:00:20.837242] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.837249] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1246fe0): datao=0, datal=4096, cccid=4 00:28:32.846 [2024-07-14 19:00:20.837256] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ade80) on tqpair(0x1246fe0): expected_datao=0, payload_size=4096 00:28:32.846 [2024-07-14 19:00:20.837264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.837281] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.837290] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.879902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.846 [2024-07-14 19:00:20.879921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.846 [2024-07-14 19:00:20.879929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.879936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ade80) on tqpair=0x1246fe0 00:28:32.846 [2024-07-14 19:00:20.879957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.879977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.879992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.880000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1246fe0) 00:28:32.846 [2024-07-14 19:00:20.880011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.846 [2024-07-14 19:00:20.880034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ade80, cid 4, qid 0 00:28:32.846 [2024-07-14 19:00:20.880177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.846 [2024-07-14 19:00:20.880190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.846 [2024-07-14 19:00:20.880197] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.880203] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1246fe0): datao=0, datal=4096, cccid=4 00:28:32.846 [2024-07-14 19:00:20.880211] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ade80) on tqpair(0x1246fe0): expected_datao=0, payload_size=4096 00:28:32.846 [2024-07-14 19:00:20.880219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.880235] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.880244] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.924905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.846 [2024-07-14 19:00:20.924923] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.846 [2024-07-14 19:00:20.924930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.924937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ade80) on tqpair=0x1246fe0 00:28:32.846 [2024-07-14 19:00:20.924964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.924980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.924996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.925006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.925019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.925028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.925036] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:32.846 [2024-07-14 19:00:20.925044] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:32.846 [2024-07-14 19:00:20.925052] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:32.846 [2024-07-14 19:00:20.925071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.925080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1246fe0) 00:28:32.846 [2024-07-14 19:00:20.925092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.846 [2024-07-14 19:00:20.925103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.925110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.846 [2024-07-14 19:00:20.925117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1246fe0) 00:28:32.846 [2024-07-14 19:00:20.925126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:32.846 [2024-07-14 19:00:20.925152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ade80, cid 4, qid 0 00:28:32.846 [2024-07-14 19:00:20.925165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ae000, cid 5, qid 0 00:28:32.846 [2024-07-14 19:00:20.925266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.847 [2024-07-14 19:00:20.925279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.847 [2024-07-14 19:00:20.925286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ade80) on tqpair=0x1246fe0 00:28:32.847 [2024-07-14 19:00:20.925303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.847 [2024-07-14 19:00:20.925312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.847 [2024-07-14 19:00:20.925319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x12ae000) on tqpair=0x1246fe0 00:28:32.847 [2024-07-14 19:00:20.925341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1246fe0) 00:28:32.847 [2024-07-14 19:00:20.925360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.847 [2024-07-14 19:00:20.925381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ae000, cid 5, qid 0 00:28:32.847 [2024-07-14 19:00:20.925473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.847 [2024-07-14 19:00:20.925488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.847 [2024-07-14 19:00:20.925495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ae000) on tqpair=0x1246fe0 00:28:32.847 [2024-07-14 19:00:20.925518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1246fe0) 00:28:32.847 [2024-07-14 19:00:20.925538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.847 [2024-07-14 19:00:20.925562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ae000, cid 5, qid 0 00:28:32.847 [2024-07-14 19:00:20.925653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.847 [2024-07-14 19:00:20.925668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.847 [2024-07-14 19:00:20.925675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 
19:00:20.925682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ae000) on tqpair=0x1246fe0 00:28:32.847 [2024-07-14 19:00:20.925698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1246fe0) 00:28:32.847 [2024-07-14 19:00:20.925718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.847 [2024-07-14 19:00:20.925738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ae000, cid 5, qid 0 00:28:32.847 [2024-07-14 19:00:20.925826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.847 [2024-07-14 19:00:20.925839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.847 [2024-07-14 19:00:20.925846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ae000) on tqpair=0x1246fe0 00:28:32.847 [2024-07-14 19:00:20.925883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1246fe0) 00:28:32.847 [2024-07-14 19:00:20.925906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.847 [2024-07-14 19:00:20.925918] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1246fe0) 00:28:32.847 [2024-07-14 19:00:20.925934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff 
cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.847 [2024-07-14 19:00:20.925946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1246fe0) 00:28:32.847 [2024-07-14 19:00:20.925962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.847 [2024-07-14 19:00:20.925974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.925981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1246fe0) 00:28:32.847 [2024-07-14 19:00:20.925990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.847 [2024-07-14 19:00:20.926012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ae000, cid 5, qid 0 00:28:32.847 [2024-07-14 19:00:20.926024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ade80, cid 4, qid 0 00:28:32.847 [2024-07-14 19:00:20.926031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ae180, cid 6, qid 0 00:28:32.847 [2024-07-14 19:00:20.926039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ae300, cid 7, qid 0 00:28:32.847 [2024-07-14 19:00:20.926240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.847 [2024-07-14 19:00:20.926253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.847 [2024-07-14 19:00:20.926260] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926267] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1246fe0): datao=0, datal=8192, 
cccid=5 00:28:32.847 [2024-07-14 19:00:20.926278] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ae000) on tqpair(0x1246fe0): expected_datao=0, payload_size=8192 00:28:32.847 [2024-07-14 19:00:20.926286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926307] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926317] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.847 [2024-07-14 19:00:20.926335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.847 [2024-07-14 19:00:20.926341] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926348] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1246fe0): datao=0, datal=512, cccid=4 00:28:32.847 [2024-07-14 19:00:20.926355] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ade80) on tqpair(0x1246fe0): expected_datao=0, payload_size=512 00:28:32.847 [2024-07-14 19:00:20.926363] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926372] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926379] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.847 [2024-07-14 19:00:20.926396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.847 [2024-07-14 19:00:20.926402] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926409] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1246fe0): datao=0, datal=512, cccid=6 00:28:32.847 [2024-07-14 19:00:20.926416] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ae180) on tqpair(0x1246fe0): expected_datao=0, payload_size=512 00:28:32.847 [2024-07-14 19:00:20.926423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926432] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926440] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:32.847 [2024-07-14 19:00:20.926457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:32.847 [2024-07-14 19:00:20.926463] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926469] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1246fe0): datao=0, datal=4096, cccid=7 00:28:32.847 [2024-07-14 19:00:20.926477] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ae300) on tqpair(0x1246fe0): expected_datao=0, payload_size=4096 00:28:32.847 [2024-07-14 19:00:20.926484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926493] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926501] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.847 [2024-07-14 19:00:20.926521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.847 [2024-07-14 19:00:20.926528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ae000) on tqpair=0x1246fe0 00:28:32.847 [2024-07-14 19:00:20.926567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.847 [2024-07-14 
19:00:20.926578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.847 [2024-07-14 19:00:20.926585] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ade80) on tqpair=0x1246fe0 00:28:32.847 [2024-07-14 19:00:20.926605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.847 [2024-07-14 19:00:20.926630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.847 [2024-07-14 19:00:20.926636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ae180) on tqpair=0x1246fe0 00:28:32.847 [2024-07-14 19:00:20.926656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.847 [2024-07-14 19:00:20.926665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.847 [2024-07-14 19:00:20.926671] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.847 [2024-07-14 19:00:20.926677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ae300) on tqpair=0x1246fe0 00:28:32.847 ===================================================== 00:28:32.847 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.847 ===================================================== 00:28:32.847 Controller Capabilities/Features 00:28:32.847 ================================ 00:28:32.847 Vendor ID: 8086 00:28:32.847 Subsystem Vendor ID: 8086 00:28:32.847 Serial Number: SPDK00000000000001 00:28:32.847 Model Number: SPDK bdev Controller 00:28:32.847 Firmware Version: 24.09 00:28:32.847 Recommended Arb Burst: 6 00:28:32.847 IEEE OUI Identifier: e4 d2 5c 00:28:32.847 Multi-path I/O 00:28:32.847 May have multiple subsystem ports: Yes 00:28:32.847 May have multiple controllers: Yes 00:28:32.847 Associated 
with SR-IOV VF: No 00:28:32.847 Max Data Transfer Size: 131072 00:28:32.847 Max Number of Namespaces: 32 00:28:32.847 Max Number of I/O Queues: 127 00:28:32.847 NVMe Specification Version (VS): 1.3 00:28:32.847 NVMe Specification Version (Identify): 1.3 00:28:32.847 Maximum Queue Entries: 128 00:28:32.847 Contiguous Queues Required: Yes 00:28:32.847 Arbitration Mechanisms Supported 00:28:32.847 Weighted Round Robin: Not Supported 00:28:32.847 Vendor Specific: Not Supported 00:28:32.848 Reset Timeout: 15000 ms 00:28:32.848 Doorbell Stride: 4 bytes 00:28:32.848 NVM Subsystem Reset: Not Supported 00:28:32.848 Command Sets Supported 00:28:32.848 NVM Command Set: Supported 00:28:32.848 Boot Partition: Not Supported 00:28:32.848 Memory Page Size Minimum: 4096 bytes 00:28:32.848 Memory Page Size Maximum: 4096 bytes 00:28:32.848 Persistent Memory Region: Not Supported 00:28:32.848 Optional Asynchronous Events Supported 00:28:32.848 Namespace Attribute Notices: Supported 00:28:32.848 Firmware Activation Notices: Not Supported 00:28:32.848 ANA Change Notices: Not Supported 00:28:32.848 PLE Aggregate Log Change Notices: Not Supported 00:28:32.848 LBA Status Info Alert Notices: Not Supported 00:28:32.848 EGE Aggregate Log Change Notices: Not Supported 00:28:32.848 Normal NVM Subsystem Shutdown event: Not Supported 00:28:32.848 Zone Descriptor Change Notices: Not Supported 00:28:32.848 Discovery Log Change Notices: Not Supported 00:28:32.848 Controller Attributes 00:28:32.848 128-bit Host Identifier: Supported 00:28:32.848 Non-Operational Permissive Mode: Not Supported 00:28:32.848 NVM Sets: Not Supported 00:28:32.848 Read Recovery Levels: Not Supported 00:28:32.848 Endurance Groups: Not Supported 00:28:32.848 Predictable Latency Mode: Not Supported 00:28:32.848 Traffic Based Keep ALive: Not Supported 00:28:32.848 Namespace Granularity: Not Supported 00:28:32.848 SQ Associations: Not Supported 00:28:32.848 UUID List: Not Supported 00:28:32.848 Multi-Domain Subsystem: Not 
Supported 00:28:32.848 Fixed Capacity Management: Not Supported 00:28:32.848 Variable Capacity Management: Not Supported 00:28:32.848 Delete Endurance Group: Not Supported 00:28:32.848 Delete NVM Set: Not Supported 00:28:32.848 Extended LBA Formats Supported: Not Supported 00:28:32.848 Flexible Data Placement Supported: Not Supported 00:28:32.848 00:28:32.848 Controller Memory Buffer Support 00:28:32.848 ================================ 00:28:32.848 Supported: No 00:28:32.848 00:28:32.848 Persistent Memory Region Support 00:28:32.848 ================================ 00:28:32.848 Supported: No 00:28:32.848 00:28:32.848 Admin Command Set Attributes 00:28:32.848 ============================ 00:28:32.848 Security Send/Receive: Not Supported 00:28:32.848 Format NVM: Not Supported 00:28:32.848 Firmware Activate/Download: Not Supported 00:28:32.848 Namespace Management: Not Supported 00:28:32.848 Device Self-Test: Not Supported 00:28:32.848 Directives: Not Supported 00:28:32.848 NVMe-MI: Not Supported 00:28:32.848 Virtualization Management: Not Supported 00:28:32.848 Doorbell Buffer Config: Not Supported 00:28:32.848 Get LBA Status Capability: Not Supported 00:28:32.848 Command & Feature Lockdown Capability: Not Supported 00:28:32.848 Abort Command Limit: 4 00:28:32.848 Async Event Request Limit: 4 00:28:32.848 Number of Firmware Slots: N/A 00:28:32.848 Firmware Slot 1 Read-Only: N/A 00:28:32.848 Firmware Activation Without Reset: N/A 00:28:32.848 Multiple Update Detection Support: N/A 00:28:32.848 Firmware Update Granularity: No Information Provided 00:28:32.848 Per-Namespace SMART Log: No 00:28:32.848 Asymmetric Namespace Access Log Page: Not Supported 00:28:32.848 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:32.848 Command Effects Log Page: Supported 00:28:32.848 Get Log Page Extended Data: Supported 00:28:32.848 Telemetry Log Pages: Not Supported 00:28:32.848 Persistent Event Log Pages: Not Supported 00:28:32.848 Supported Log Pages Log Page: May Support 
00:28:32.848 Commands Supported & Effects Log Page: Not Supported 00:28:32.848 Feature Identifiers & Effects Log Page:May Support 00:28:32.848 NVMe-MI Commands & Effects Log Page: May Support 00:28:32.848 Data Area 4 for Telemetry Log: Not Supported 00:28:32.848 Error Log Page Entries Supported: 128 00:28:32.848 Keep Alive: Supported 00:28:32.848 Keep Alive Granularity: 10000 ms 00:28:32.848 00:28:32.848 NVM Command Set Attributes 00:28:32.848 ========================== 00:28:32.848 Submission Queue Entry Size 00:28:32.848 Max: 64 00:28:32.848 Min: 64 00:28:32.848 Completion Queue Entry Size 00:28:32.848 Max: 16 00:28:32.848 Min: 16 00:28:32.848 Number of Namespaces: 32 00:28:32.848 Compare Command: Supported 00:28:32.848 Write Uncorrectable Command: Not Supported 00:28:32.848 Dataset Management Command: Supported 00:28:32.848 Write Zeroes Command: Supported 00:28:32.848 Set Features Save Field: Not Supported 00:28:32.848 Reservations: Supported 00:28:32.848 Timestamp: Not Supported 00:28:32.848 Copy: Supported 00:28:32.848 Volatile Write Cache: Present 00:28:32.848 Atomic Write Unit (Normal): 1 00:28:32.848 Atomic Write Unit (PFail): 1 00:28:32.848 Atomic Compare & Write Unit: 1 00:28:32.848 Fused Compare & Write: Supported 00:28:32.848 Scatter-Gather List 00:28:32.848 SGL Command Set: Supported 00:28:32.848 SGL Keyed: Supported 00:28:32.848 SGL Bit Bucket Descriptor: Not Supported 00:28:32.848 SGL Metadata Pointer: Not Supported 00:28:32.848 Oversized SGL: Not Supported 00:28:32.848 SGL Metadata Address: Not Supported 00:28:32.848 SGL Offset: Supported 00:28:32.848 Transport SGL Data Block: Not Supported 00:28:32.848 Replay Protected Memory Block: Not Supported 00:28:32.848 00:28:32.848 Firmware Slot Information 00:28:32.848 ========================= 00:28:32.848 Active slot: 1 00:28:32.848 Slot 1 Firmware Revision: 24.09 00:28:32.848 00:28:32.848 00:28:32.848 Commands Supported and Effects 00:28:32.848 ============================== 00:28:32.848 Admin Commands 
00:28:32.848 -------------- 00:28:32.848 Get Log Page (02h): Supported 00:28:32.848 Identify (06h): Supported 00:28:32.848 Abort (08h): Supported 00:28:32.848 Set Features (09h): Supported 00:28:32.848 Get Features (0Ah): Supported 00:28:32.848 Asynchronous Event Request (0Ch): Supported 00:28:32.848 Keep Alive (18h): Supported 00:28:32.848 I/O Commands 00:28:32.848 ------------ 00:28:32.848 Flush (00h): Supported LBA-Change 00:28:32.848 Write (01h): Supported LBA-Change 00:28:32.848 Read (02h): Supported 00:28:32.848 Compare (05h): Supported 00:28:32.848 Write Zeroes (08h): Supported LBA-Change 00:28:32.848 Dataset Management (09h): Supported LBA-Change 00:28:32.848 Copy (19h): Supported LBA-Change 00:28:32.848 00:28:32.848 Error Log 00:28:32.848 ========= 00:28:32.848 00:28:32.848 Arbitration 00:28:32.848 =========== 00:28:32.848 Arbitration Burst: 1 00:28:32.848 00:28:32.848 Power Management 00:28:32.848 ================ 00:28:32.848 Number of Power States: 1 00:28:32.848 Current Power State: Power State #0 00:28:32.848 Power State #0: 00:28:32.848 Max Power: 0.00 W 00:28:32.848 Non-Operational State: Operational 00:28:32.848 Entry Latency: Not Reported 00:28:32.848 Exit Latency: Not Reported 00:28:32.848 Relative Read Throughput: 0 00:28:32.848 Relative Read Latency: 0 00:28:32.848 Relative Write Throughput: 0 00:28:32.848 Relative Write Latency: 0 00:28:32.848 Idle Power: Not Reported 00:28:32.848 Active Power: Not Reported 00:28:32.848 Non-Operational Permissive Mode: Not Supported 00:28:32.848 00:28:32.848 Health Information 00:28:32.848 ================== 00:28:32.848 Critical Warnings: 00:28:32.848 Available Spare Space: OK 00:28:32.848 Temperature: OK 00:28:32.848 Device Reliability: OK 00:28:32.848 Read Only: No 00:28:32.848 Volatile Memory Backup: OK 00:28:32.848 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:32.848 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:32.848 Available Spare: 0% 00:28:32.848 Available Spare Threshold: 0% 00:28:32.848 
Life Percentage Used:[2024-07-14 19:00:20.926801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.848 [2024-07-14 19:00:20.926813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1246fe0) 00:28:32.848 [2024-07-14 19:00:20.926824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.848 [2024-07-14 19:00:20.926846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ae300, cid 7, qid 0 00:28:32.848 [2024-07-14 19:00:20.926989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.848 [2024-07-14 19:00:20.927004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.848 [2024-07-14 19:00:20.927011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.848 [2024-07-14 19:00:20.927018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ae300) on tqpair=0x1246fe0 00:28:32.848 [2024-07-14 19:00:20.927061] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:32.848 [2024-07-14 19:00:20.927080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ad880) on tqpair=0x1246fe0 00:28:32.848 [2024-07-14 19:00:20.927091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.848 [2024-07-14 19:00:20.927100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ada00) on tqpair=0x1246fe0 00:28:32.848 [2024-07-14 19:00:20.927108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.848 [2024-07-14 19:00:20.927116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12adb80) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.927123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.850 [2024-07-14 19:00:20.927132] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12add00) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.927139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:32.850 [2024-07-14 19:00:20.927152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1246fe0) 00:28:32.850 [2024-07-14 19:00:20.927191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-14 19:00:20.927213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12add00, cid 3, qid 0 00:28:32.850 [2024-07-14 19:00:20.927349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.850 [2024-07-14 19:00:20.927365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.850 [2024-07-14 19:00:20.927372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12add00) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.927390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1246fe0) 00:28:32.850 [2024-07-14 19:00:20.927418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-14 19:00:20.927446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12add00, cid 3, qid 0 00:28:32.850 [2024-07-14 19:00:20.927546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.850 [2024-07-14 19:00:20.927561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.850 [2024-07-14 19:00:20.927568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12add00) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.927583] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:32.850 [2024-07-14 19:00:20.927590] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:32.850 [2024-07-14 19:00:20.927607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1246fe0) 00:28:32.850 [2024-07-14 19:00:20.927632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-14 19:00:20.927653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12add00, cid 3, qid 0 00:28:32.850 [2024-07-14 19:00:20.927738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.850 [2024-07-14 19:00:20.927751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.850 [2024-07-14 19:00:20.927758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927764] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12add00) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.927781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1246fe0) 00:28:32.850 [2024-07-14 19:00:20.927807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-14 19:00:20.927827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12add00, cid 3, qid 0 00:28:32.850 [2024-07-14 19:00:20.927921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.850 [2024-07-14 19:00:20.927935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.850 [2024-07-14 19:00:20.927942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12add00) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.927965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.927981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1246fe0) 00:28:32.850 [2024-07-14 19:00:20.927991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-14 19:00:20.928012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12add00, cid 3, qid 0 00:28:32.850 [2024-07-14 19:00:20.928099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.850 [2024-07-14 
19:00:20.928113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.850 [2024-07-14 19:00:20.928120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.928127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12add00) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.928143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.928153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.928163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1246fe0) 00:28:32.850 [2024-07-14 19:00:20.928174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-14 19:00:20.928194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12add00, cid 3, qid 0 00:28:32.850 [2024-07-14 19:00:20.928276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.850 [2024-07-14 19:00:20.928289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.850 [2024-07-14 19:00:20.928295] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.928302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12add00) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.928318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.928327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.928334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1246fe0) 00:28:32.850 [2024-07-14 19:00:20.928344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-14 
19:00:20.928365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12add00, cid 3, qid 0 00:28:32.850 [2024-07-14 19:00:20.931888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.850 [2024-07-14 19:00:20.931916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.850 [2024-07-14 19:00:20.931924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.931931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12add00) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.931949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.931959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.931966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1246fe0) 00:28:32.850 [2024-07-14 19:00:20.931976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.850 [2024-07-14 19:00:20.931998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12add00, cid 3, qid 0 00:28:32.850 [2024-07-14 19:00:20.932122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:32.850 [2024-07-14 19:00:20.932136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:32.850 [2024-07-14 19:00:20.932143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:32.850 [2024-07-14 19:00:20.932150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12add00) on tqpair=0x1246fe0 00:28:32.850 [2024-07-14 19:00:20.932163] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:28:32.850 0% 00:28:32.850 Data Units Read: 0 00:28:32.850 Data Units Written: 0 00:28:32.850 Host Read Commands: 0 00:28:32.850 Host Write 
Commands: 0 00:28:32.850 Controller Busy Time: 0 minutes 00:28:32.851 Power Cycles: 0 00:28:32.851 Power On Hours: 0 hours 00:28:32.851 Unsafe Shutdowns: 0 00:28:32.851 Unrecoverable Media Errors: 0 00:28:32.851 Lifetime Error Log Entries: 0 00:28:32.851 Warning Temperature Time: 0 minutes 00:28:32.851 Critical Temperature Time: 0 minutes 00:28:32.851 00:28:32.851 Number of Queues 00:28:32.851 ================ 00:28:32.851 Number of I/O Submission Queues: 127 00:28:32.851 Number of I/O Completion Queues: 127 00:28:32.851 00:28:32.851 Active Namespaces 00:28:32.851 ================= 00:28:32.851 Namespace ID:1 00:28:32.851 Error Recovery Timeout: Unlimited 00:28:32.851 Command Set Identifier: NVM (00h) 00:28:32.851 Deallocate: Supported 00:28:32.851 Deallocated/Unwritten Error: Not Supported 00:28:32.851 Deallocated Read Value: Unknown 00:28:32.851 Deallocate in Write Zeroes: Not Supported 00:28:32.851 Deallocated Guard Field: 0xFFFF 00:28:32.851 Flush: Supported 00:28:32.851 Reservation: Supported 00:28:32.851 Namespace Sharing Capabilities: Multiple Controllers 00:28:32.851 Size (in LBAs): 131072 (0GiB) 00:28:32.851 Capacity (in LBAs): 131072 (0GiB) 00:28:32.851 Utilization (in LBAs): 131072 (0GiB) 00:28:32.851 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:32.851 EUI64: ABCDEF0123456789 00:28:32.851 UUID: b7832edb-fd93-4f12-8500-320e0895eb9f 00:28:32.851 Thin Provisioning: Not Supported 00:28:32.851 Per-NS Atomic Units: Yes 00:28:32.851 Atomic Boundary Size (Normal): 0 00:28:32.851 Atomic Boundary Size (PFail): 0 00:28:32.851 Atomic Boundary Offset: 0 00:28:32.851 Maximum Single Source Range Length: 65535 00:28:32.851 Maximum Copy Length: 65535 00:28:32.851 Maximum Source Range Count: 1 00:28:32.851 NGUID/EUI64 Never Reused: No 00:28:32.851 Namespace Write Protected: No 00:28:32.851 Number of LBA Formats: 1 00:28:32.851 Current LBA Format: LBA Format #00 00:28:32.851 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:32.851 00:28:32.851 19:00:20 
nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:32.851 rmmod nvme_tcp 00:28:32.851 rmmod nvme_fabrics 00:28:32.851 rmmod nvme_keyring 00:28:32.851 19:00:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3692436 ']' 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3692436 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3692436 ']' 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3692436 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 
00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3692436 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3692436' 00:28:32.851 killing process with pid 3692436 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3692436 00:28:32.851 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3692436 00:28:33.108 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:33.108 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:33.108 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:33.108 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:33.108 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:33.108 19:00:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.108 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.108 19:00:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.648 19:00:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:35.648 00:28:35.648 real 0m5.313s 00:28:35.648 user 0m4.334s 00:28:35.648 sys 0m1.821s 00:28:35.648 19:00:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:35.648 19:00:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.648 
************************************ 00:28:35.648 END TEST nvmf_identify 00:28:35.648 ************************************ 00:28:35.648 19:00:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:35.648 19:00:23 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:35.648 19:00:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:35.648 19:00:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.648 19:00:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:35.648 ************************************ 00:28:35.648 START TEST nvmf_perf 00:28:35.648 ************************************ 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:35.648 * Looking for test storage... 00:28:35.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:35.648 
19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@285 -- # xtrace_disable 00:28:35.648 19:00:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:37.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.547 19:00:25 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:37.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.547 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:37.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:37.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.548 19:00:25 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:37.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:28:37.548 00:28:37.548 --- 10.0.0.2 ping statistics --- 00:28:37.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.548 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:28:37.548 00:28:37.548 --- 10.0.0.1 ping statistics --- 00:28:37.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.548 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3694500 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3694500 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3694500 ']' 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.548 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:37.548 [2024-07-14 19:00:25.707913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:37.548 [2024-07-14 19:00:25.707982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.548 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.809 [2024-07-14 19:00:25.779319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.809 [2024-07-14 19:00:25.873667] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.809 [2024-07-14 19:00:25.873734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.809 [2024-07-14 19:00:25.873751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.809 [2024-07-14 19:00:25.873764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.809 [2024-07-14 19:00:25.873775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:37.809 [2024-07-14 19:00:25.873842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.809 [2024-07-14 19:00:25.873917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.809 [2024-07-14 19:00:25.873956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.809 [2024-07-14 19:00:25.873959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.809 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:37.809 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:37.809 19:00:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:37.809 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:37.809 19:00:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:37.809 19:00:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.809 19:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:37.809 19:00:26 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:41.124 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:41.124 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:41.381 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:41.381 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:41.637 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:41.637 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:88:00.0 ']' 00:28:41.638 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:41.638 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:41.638 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:41.638 [2024-07-14 19:00:29.845427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.910 19:00:29 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.910 19:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:41.910 19:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.167 19:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:42.167 19:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:42.424 19:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.681 [2024-07-14 19:00:30.829040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.681 19:00:30 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:42.938 19:00:31 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:42.938 19:00:31 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 
00:28:42.938 19:00:31 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:42.938 19:00:31 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:44.307 Initializing NVMe Controllers 00:28:44.307 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:44.307 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:44.307 Initialization complete. Launching workers. 00:28:44.307 ======================================================== 00:28:44.307 Latency(us) 00:28:44.307 Device Information : IOPS MiB/s Average min max 00:28:44.308 PCIE (0000:88:00.0) NSID 1 from core 0: 85083.51 332.36 375.78 45.40 6254.24 00:28:44.308 ======================================================== 00:28:44.308 Total : 85083.51 332.36 375.78 45.40 6254.24 00:28:44.308 00:28:44.308 19:00:32 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:44.308 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.676 Initializing NVMe Controllers 00:28:45.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:45.676 Initialization complete. Launching workers. 
00:28:45.676 ======================================================== 00:28:45.676 Latency(us) 00:28:45.676 Device Information : IOPS MiB/s Average min max 00:28:45.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 50.00 0.20 20631.92 157.61 45765.17 00:28:45.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16467.10 6971.16 47925.00 00:28:45.676 ======================================================== 00:28:45.676 Total : 111.00 0.43 18343.15 157.61 47925.00 00:28:45.676 00:28:45.676 19:00:33 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:45.676 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.605 Initializing NVMe Controllers 00:28:46.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:46.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:46.605 Initialization complete. Launching workers. 
00:28:46.605 ======================================================== 00:28:46.605 Latency(us) 00:28:46.605 Device Information : IOPS MiB/s Average min max 00:28:46.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8548.75 33.39 3744.25 597.23 7359.99 00:28:46.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3910.51 15.28 8240.39 4236.71 15624.83 00:28:46.605 ======================================================== 00:28:46.605 Total : 12459.27 48.67 5155.42 597.23 15624.83 00:28:46.605 00:28:46.861 19:00:34 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:46.861 19:00:34 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:46.861 19:00:34 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.861 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.384 Initializing NVMe Controllers 00:28:49.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.384 Controller IO queue size 128, less than required. 00:28:49.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:49.384 Controller IO queue size 128, less than required. 00:28:49.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:49.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:49.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:49.384 Initialization complete. Launching workers. 
00:28:49.384 ======================================================== 00:28:49.384 Latency(us) 00:28:49.384 Device Information : IOPS MiB/s Average min max 00:28:49.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1689.79 422.45 76853.75 49091.46 139360.01 00:28:49.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.43 147.36 223194.15 114300.85 368558.66 00:28:49.384 ======================================================== 00:28:49.384 Total : 2279.22 569.80 114698.75 49091.46 368558.66 00:28:49.384 00:28:49.384 19:00:37 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:49.384 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.384 No valid NVMe controllers or AIO or URING devices found 00:28:49.384 Initializing NVMe Controllers 00:28:49.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.384 Controller IO queue size 128, less than required. 00:28:49.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:49.384 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:49.384 Controller IO queue size 128, less than required. 00:28:49.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:49.384 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:49.384 WARNING: Some requested NVMe devices were skipped 00:28:49.384 19:00:37 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:49.384 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.664 Initializing NVMe Controllers 00:28:52.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:52.664 Controller IO queue size 128, less than required. 00:28:52.664 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:52.664 Controller IO queue size 128, less than required. 00:28:52.664 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:52.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:52.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:52.664 Initialization complete. Launching workers. 
00:28:52.664 00:28:52.664 ==================== 00:28:52.664 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:52.664 TCP transport: 00:28:52.664 polls: 12120 00:28:52.664 idle_polls: 8754 00:28:52.664 sock_completions: 3366 00:28:52.664 nvme_completions: 5973 00:28:52.664 submitted_requests: 8916 00:28:52.664 queued_requests: 1 00:28:52.664 00:28:52.664 ==================== 00:28:52.664 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:52.664 TCP transport: 00:28:52.664 polls: 12105 00:28:52.664 idle_polls: 8300 00:28:52.664 sock_completions: 3805 00:28:52.664 nvme_completions: 6585 00:28:52.664 submitted_requests: 9888 00:28:52.664 queued_requests: 1 00:28:52.664 ======================================================== 00:28:52.664 Latency(us) 00:28:52.664 Device Information : IOPS MiB/s Average min max 00:28:52.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1489.90 372.47 89563.71 48651.16 169851.28 00:28:52.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1642.58 410.64 78602.86 47693.55 121761.51 00:28:52.664 ======================================================== 00:28:52.664 Total : 3132.47 783.12 83816.16 47693.55 169851.28 00:28:52.664 00:28:52.664 19:00:40 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:52.664 19:00:40 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:52.664 19:00:40 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:52.664 19:00:40 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:52.664 19:00:40 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:55.934 19:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=c920c9cf-25f3-4b07-9974-15d00703e200 00:28:55.934 19:00:43 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c920c9cf-25f3-4b07-9974-15d00703e200 00:28:55.934 19:00:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c920c9cf-25f3-4b07-9974-15d00703e200 00:28:55.934 19:00:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:55.934 19:00:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:55.934 19:00:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:55.934 19:00:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:55.934 { 00:28:55.934 "uuid": "c920c9cf-25f3-4b07-9974-15d00703e200", 00:28:55.934 "name": "lvs_0", 00:28:55.934 "base_bdev": "Nvme0n1", 00:28:55.934 "total_data_clusters": 238234, 00:28:55.934 "free_clusters": 238234, 00:28:55.934 "block_size": 512, 00:28:55.934 "cluster_size": 4194304 00:28:55.934 } 00:28:55.934 ]' 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c920c9cf-25f3-4b07-9974-15d00703e200") .free_clusters' 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c920c9cf-25f3-4b07-9974-15d00703e200") .cluster_size' 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:55.934 952936 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:28:55.934 19:00:44 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c920c9cf-25f3-4b07-9974-15d00703e200 lbd_0 20480 00:28:56.880 19:00:44 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=42fa36af-8394-4a4a-8c0b-451d58cc9ccc 00:28:56.880 19:00:44 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 42fa36af-8394-4a4a-8c0b-451d58cc9ccc lvs_n_0 00:28:57.444 19:00:45 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=c008b815-cbdc-41d9-b36b-ea6f07d0ac4f 00:28:57.444 19:00:45 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb c008b815-cbdc-41d9-b36b-ea6f07d0ac4f 00:28:57.444 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c008b815-cbdc-41d9-b36b-ea6f07d0ac4f 00:28:57.444 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:57.444 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:57.444 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:57.444 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:57.701 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:57.701 { 00:28:57.701 "uuid": "c920c9cf-25f3-4b07-9974-15d00703e200", 00:28:57.701 "name": "lvs_0", 00:28:57.701 "base_bdev": "Nvme0n1", 00:28:57.701 "total_data_clusters": 238234, 00:28:57.701 "free_clusters": 233114, 00:28:57.701 "block_size": 512, 00:28:57.701 "cluster_size": 4194304 00:28:57.701 }, 00:28:57.701 { 00:28:57.701 "uuid": "c008b815-cbdc-41d9-b36b-ea6f07d0ac4f", 00:28:57.701 "name": "lvs_n_0", 00:28:57.701 "base_bdev": "42fa36af-8394-4a4a-8c0b-451d58cc9ccc", 00:28:57.701 "total_data_clusters": 5114, 00:28:57.701 "free_clusters": 
5114, 00:28:57.701 "block_size": 512, 00:28:57.701 "cluster_size": 4194304 00:28:57.701 } 00:28:57.701 ]' 00:28:57.701 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c008b815-cbdc-41d9-b36b-ea6f07d0ac4f") .free_clusters' 00:28:57.701 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:57.701 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c008b815-cbdc-41d9-b36b-ea6f07d0ac4f") .cluster_size' 00:28:57.701 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:57.701 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:57.701 19:00:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:57.701 20456 00:28:57.701 19:00:45 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:57.701 19:00:45 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c008b815-cbdc-41d9-b36b-ea6f07d0ac4f lbd_nest_0 20456 00:28:57.958 19:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=2aada9d7-2ee1-4184-82b5-851adcb86feb 00:28:57.958 19:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:58.215 19:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:58.215 19:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2aada9d7-2ee1-4184-82b5-851adcb86feb 00:28:58.471 19:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.728 19:00:46 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:58.728 19:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:58.728 19:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:58.728 19:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:58.729 19:00:46 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:58.729 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.945 Initializing NVMe Controllers 00:29:10.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:10.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:10.945 Initialization complete. Launching workers. 00:29:10.945 ======================================================== 00:29:10.945 Latency(us) 00:29:10.945 Device Information : IOPS MiB/s Average min max 00:29:10.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.59 0.02 22453.81 175.41 48725.88 00:29:10.945 ======================================================== 00:29:10.945 Total : 44.59 0.02 22453.81 175.41 48725.88 00:29:10.945 00:29:10.945 19:00:57 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:10.945 19:00:57 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.945 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.900 Initializing NVMe Controllers 00:29:20.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:20.900 Initialization complete. 
Launching workers. 00:29:20.900 ======================================================== 00:29:20.900 Latency(us) 00:29:20.900 Device Information : IOPS MiB/s Average min max 00:29:20.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.78 9.22 13563.75 5829.11 48828.37 00:29:20.900 ======================================================== 00:29:20.900 Total : 73.78 9.22 13563.75 5829.11 48828.37 00:29:20.900 00:29:20.900 19:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:20.900 19:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:20.900 19:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:20.900 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.848 Initializing NVMe Controllers 00:29:30.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:30.848 Initialization complete. Launching workers. 
00:29:30.848 ======================================================== 00:29:30.848 Latency(us) 00:29:30.848 Device Information : IOPS MiB/s Average min max 00:29:30.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7698.17 3.76 4164.43 303.26 47887.33 00:29:30.848 ======================================================== 00:29:30.848 Total : 7698.17 3.76 4164.43 303.26 47887.33 00:29:30.848 00:29:30.848 19:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:30.848 19:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:30.848 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.825 Initializing NVMe Controllers 00:29:40.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:40.825 Initialization complete. Launching workers. 
00:29:40.825 ======================================================== 00:29:40.825 Latency(us) 00:29:40.825 Device Information : IOPS MiB/s Average min max 00:29:40.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3973.50 496.69 8058.55 735.44 17023.78 00:29:40.825 ======================================================== 00:29:40.825 Total : 3973.50 496.69 8058.55 735.44 17023.78 00:29:40.825 00:29:40.825 19:01:28 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:40.825 19:01:28 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:40.825 19:01:28 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:40.825 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.810 Initializing NVMe Controllers 00:29:50.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.810 Controller IO queue size 128, less than required. 00:29:50.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:50.810 Initialization complete. Launching workers. 
00:29:50.810 ======================================================== 00:29:50.810 Latency(us) 00:29:50.810 Device Information : IOPS MiB/s Average min max 00:29:50.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11860.67 5.79 10791.86 1678.42 28999.90 00:29:50.810 ======================================================== 00:29:50.810 Total : 11860.67 5.79 10791.86 1678.42 28999.90 00:29:50.810 00:29:50.810 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:50.810 19:01:38 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.810 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.812 Initializing NVMe Controllers 00:30:00.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.812 Controller IO queue size 128, less than required. 00:30:00.812 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:00.812 Initialization complete. Launching workers. 
00:30:00.812 ======================================================== 00:30:00.812 Latency(us) 00:30:00.812 Device Information : IOPS MiB/s Average min max 00:30:00.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1190.42 148.80 107798.95 24188.65 198677.48 00:30:00.812 ======================================================== 00:30:00.812 Total : 1190.42 148.80 107798.95 24188.65 198677.48 00:30:00.812 00:30:00.812 19:01:48 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:01.070 19:01:49 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2aada9d7-2ee1-4184-82b5-851adcb86feb 00:30:01.638 19:01:49 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:01.896 19:01:50 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 42fa36af-8394-4a4a-8c0b-451d58cc9ccc 00:30:02.463 19:01:50 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:02.463 19:01:50 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:02.463 19:01:50 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:02.463 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:02.463 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:02.463 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:02.463 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:02.463 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:02.463 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:02.463 rmmod 
nvme_tcp 00:30:02.722 rmmod nvme_fabrics 00:30:02.722 rmmod nvme_keyring 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3694500 ']' 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3694500 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3694500 ']' 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3694500 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3694500 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3694500' 00:30:02.722 killing process with pid 3694500 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3694500 00:30:02.722 19:01:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3694500 00:30:04.626 19:01:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:04.626 19:01:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:04.626 19:01:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:04.626 19:01:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.626 19:01:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:30:04.626 19:01:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.626 19:01:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.626 19:01:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.554 19:01:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:06.554 00:30:06.554 real 1m31.059s 00:30:06.554 user 5m35.493s 00:30:06.554 sys 0m16.326s 00:30:06.554 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.554 19:01:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:06.554 ************************************ 00:30:06.554 END TEST nvmf_perf 00:30:06.554 ************************************ 00:30:06.554 19:01:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:06.554 19:01:54 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:06.554 19:01:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:06.554 19:01:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:06.554 19:01:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.554 ************************************ 00:30:06.554 START TEST nvmf_fio_host 00:30:06.554 ************************************ 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:06.554 * Looking for test storage... 
00:30:06.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.554 19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:06.555 
19:01:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:06.555 19:01:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:08.460 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:08.460 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:08.460 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.460 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:08.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:30:08.461 
19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.461 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:08.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:30:08.719 00:30:08.719 --- 10.0.0.2 ping statistics --- 00:30:08.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.719 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:08.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:30:08.719 00:30:08.719 --- 10.0.0.1 ping statistics --- 00:30:08.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.719 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:08.719 19:01:56 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3706513 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3706513 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3706513 ']' 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:08.719 19:01:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.719 [2024-07-14 19:01:56.784204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:30:08.719 [2024-07-14 19:01:56.784295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.719 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.719 [2024-07-14 19:01:56.859672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:08.977 [2024-07-14 19:01:56.945810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.977 [2024-07-14 19:01:56.945860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.977 [2024-07-14 19:01:56.945895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.977 [2024-07-14 19:01:56.945908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.977 [2024-07-14 19:01:56.945917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:08.977 [2024-07-14 19:01:56.945981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.977 [2024-07-14 19:01:56.946018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.977 [2024-07-14 19:01:56.946074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:08.977 [2024-07-14 19:01:56.946077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.977 19:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:08.977 19:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:30:08.977 19:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:09.235 [2024-07-14 19:01:57.284302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.235 19:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:09.235 19:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:09.235 19:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.235 19:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:09.493 Malloc1 00:30:09.493 19:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:09.751 19:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:10.009 19:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.266 
[2024-07-14 19:01:58.318502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.266 19:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:10.524 19:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.781 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:10.781 fio-3.35 00:30:10.781 Starting 1 thread 00:30:10.781 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.310 00:30:13.310 test: (groupid=0, jobs=1): err= 0: pid=3706941: Sun Jul 14 19:02:01 2024 00:30:13.310 read: IOPS=9081, BW=35.5MiB/s (37.2MB/s)(71.2MiB/2006msec) 00:30:13.310 slat (usec): 
min=2, max=114, avg= 2.71, stdev= 1.53 00:30:13.310 clat (usec): min=2432, max=13508, avg=7736.85, stdev=616.24 00:30:13.310 lat (usec): min=2456, max=13511, avg=7739.56, stdev=616.18 00:30:13.310 clat percentiles (usec): 00:30:13.310 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:30:13.310 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:30:13.310 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8717], 00:30:13.310 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11863], 99.95th=[12780], 00:30:13.310 | 99.99th=[13435] 00:30:13.310 bw ( KiB/s): min=35712, max=36704, per=99.89%, avg=36288.00, stdev=428.33, samples=4 00:30:13.310 iops : min= 8928, max= 9176, avg=9072.00, stdev=107.08, samples=4 00:30:13.310 write: IOPS=9091, BW=35.5MiB/s (37.2MB/s)(71.2MiB/2006msec); 0 zone resets 00:30:13.310 slat (nsec): min=2269, max=95649, avg=2834.27, stdev=1185.91 00:30:13.310 clat (usec): min=1015, max=12131, avg=6311.03, stdev=511.38 00:30:13.310 lat (usec): min=1020, max=12134, avg=6313.87, stdev=511.36 00:30:13.310 clat percentiles (usec): 00:30:13.310 | 1.00th=[ 5211], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:30:13.310 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6390], 00:30:13.310 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:30:13.310 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 9372], 99.95th=[11207], 00:30:13.310 | 99.99th=[12125] 00:30:13.310 bw ( KiB/s): min=35808, max=36624, per=100.00%, avg=36370.00, stdev=380.00, samples=4 00:30:13.310 iops : min= 8952, max= 9156, avg=9092.50, stdev=95.00, samples=4 00:30:13.310 lat (msec) : 2=0.03%, 4=0.11%, 10=99.72%, 20=0.14% 00:30:13.310 cpu : usr=62.39%, sys=35.01%, ctx=40, majf=0, minf=7 00:30:13.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:13.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.310 issued rwts: total=18218,18238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.310 00:30:13.310 Run status group 0 (all jobs): 00:30:13.310 READ: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.2MiB (74.6MB), run=2006-2006msec 00:30:13.310 WRITE: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.2MiB (74.7MB), run=2006-2006msec 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:13.310 19:02:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:13.310 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:13.310 fio-3.35 00:30:13.310 Starting 1 thread 00:30:13.310 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.835 00:30:15.835 test: (groupid=0, jobs=1): err= 0: pid=3707267: Sun Jul 14 19:02:03 2024 00:30:15.835 read: IOPS=8596, BW=134MiB/s (141MB/s)(269MiB/2005msec) 00:30:15.835 slat (nsec): 
min=2779, max=93048, avg=3691.55, stdev=1631.44 00:30:15.835 clat (usec): min=1816, max=17291, avg=8680.79, stdev=1971.81 00:30:15.835 lat (usec): min=1820, max=17297, avg=8684.48, stdev=1971.84 00:30:15.835 clat percentiles (usec): 00:30:15.835 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 7046], 00:30:15.835 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9110], 00:30:15.835 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11076], 95.00th=[11994], 00:30:15.835 | 99.00th=[13960], 99.50th=[14222], 99.90th=[16188], 99.95th=[16909], 00:30:15.835 | 99.99th=[17171] 00:30:15.835 bw ( KiB/s): min=60832, max=77728, per=50.60%, avg=69600.00, stdev=7139.04, samples=4 00:30:15.835 iops : min= 3802, max= 4858, avg=4350.00, stdev=446.19, samples=4 00:30:15.835 write: IOPS=4932, BW=77.1MiB/s (80.8MB/s)(142MiB/1846msec); 0 zone resets 00:30:15.835 slat (usec): min=30, max=199, avg=34.09, stdev= 5.71 00:30:15.835 clat (usec): min=3297, max=19405, avg=11280.54, stdev=2033.84 00:30:15.835 lat (usec): min=3328, max=19456, avg=11314.63, stdev=2034.19 00:30:15.835 clat percentiles (usec): 00:30:15.835 | 1.00th=[ 7308], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9503], 00:30:15.835 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:30:15.835 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14091], 95.00th=[14877], 00:30:15.835 | 99.00th=[16581], 99.50th=[17171], 99.90th=[18482], 99.95th=[18744], 00:30:15.835 | 99.99th=[19530] 00:30:15.835 bw ( KiB/s): min=63232, max=80288, per=91.96%, avg=72568.00, stdev=7221.58, samples=4 00:30:15.835 iops : min= 3952, max= 5018, avg=4535.50, stdev=451.35, samples=4 00:30:15.835 lat (msec) : 2=0.02%, 4=0.20%, 10=59.93%, 20=39.85% 00:30:15.835 cpu : usr=74.65%, sys=23.35%, ctx=37, majf=0, minf=3 00:30:15.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:15.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.835 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:15.835 issued rwts: total=17236,9105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:15.835 00:30:15.835 Run status group 0 (all jobs): 00:30:15.835 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=269MiB (282MB), run=2005-2005msec 00:30:15.835 WRITE: bw=77.1MiB/s (80.8MB/s), 77.1MiB/s-77.1MiB/s (80.8MB/s-80.8MB/s), io=142MiB (149MB), run=1846-1846msec 00:30:15.835 19:02:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:15.835 19:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:15.835 19:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:15.835 19:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:15.835 19:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:15.835 19:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:15.835 19:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:15.835 19:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:15.835 19:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:16.092 19:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:16.092 19:02:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:16.092 19:02:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:19.371 Nvme0n1 00:30:19.371 
19:02:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:21.900 19:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=af51a7e2-a89c-434c-8428-83111e310adb 00:30:21.900 19:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb af51a7e2-a89c-434c-8428-83111e310adb 00:30:21.900 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=af51a7e2-a89c-434c-8428-83111e310adb 00:30:21.900 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:21.900 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:21.900 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:21.900 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:22.158 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:22.158 { 00:30:22.158 "uuid": "af51a7e2-a89c-434c-8428-83111e310adb", 00:30:22.158 "name": "lvs_0", 00:30:22.158 "base_bdev": "Nvme0n1", 00:30:22.158 "total_data_clusters": 930, 00:30:22.158 "free_clusters": 930, 00:30:22.158 "block_size": 512, 00:30:22.158 "cluster_size": 1073741824 00:30:22.158 } 00:30:22.158 ]' 00:30:22.158 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="af51a7e2-a89c-434c-8428-83111e310adb") .free_clusters' 00:30:22.158 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:22.158 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="af51a7e2-a89c-434c-8428-83111e310adb") .cluster_size' 00:30:22.416 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:22.416 19:02:10 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1373 -- # free_mb=952320 00:30:22.416 19:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:22.416 952320 00:30:22.416 19:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:22.674 e432edc6-6c19-46a7-ae12-d68cb9611db4 00:30:22.674 19:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:22.931 19:02:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:23.189 19:02:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:23.447 19:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.705 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:23.705 fio-3.35 00:30:23.705 Starting 1 thread 00:30:23.705 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.242 00:30:26.242 test: (groupid=0, jobs=1): err= 0: pid=3708547: Sun Jul 14 19:02:14 2024 00:30:26.242 read: IOPS=5928, BW=23.2MiB/s (24.3MB/s)(46.5MiB/2009msec) 00:30:26.242 slat (usec): min=2, max=204, avg= 2.74, stdev= 2.65 00:30:26.242 clat (usec): min=897, max=171197, avg=11779.05, stdev=11687.08 00:30:26.242 lat (usec): min=901, max=171237, avg=11781.79, stdev=11687.50 00:30:26.242 clat percentiles (msec): 00:30:26.242 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:26.242 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:26.242 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:30:26.242 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:26.242 | 99.99th=[ 171] 00:30:26.242 bw ( KiB/s): min=16488, max=26328, per=99.83%, avg=23672.00, stdev=4793.65, samples=4 00:30:26.242 iops : min= 4122, max= 6582, avg=5918.00, stdev=1198.41, samples=4 00:30:26.242 write: IOPS=5922, BW=23.1MiB/s (24.3MB/s)(46.5MiB/2009msec); 0 zone resets 00:30:26.242 slat (usec): min=2, max=139, avg= 2.87, stdev= 1.76 00:30:26.242 clat (usec): min=296, max=169245, avg=9620.56, stdev=10966.84 00:30:26.242 lat (usec): min=300, max=169253, avg=9623.43, stdev=10967.22 00:30:26.242 clat percentiles (msec): 00:30:26.242 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:26.242 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:30:26.242 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:30:26.242 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:30:26.242 | 99.99th=[ 169] 00:30:26.242 bw ( KiB/s): min=17512, max=25920, per=99.99%, avg=23690.00, stdev=4120.43, samples=4 00:30:26.242 iops : min= 
4378, max= 6480, avg=5922.50, stdev=1030.11, samples=4 00:30:26.242 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:26.242 lat (msec) : 2=0.03%, 4=0.13%, 10=54.35%, 20=44.92%, 250=0.54% 00:30:26.242 cpu : usr=56.57%, sys=41.14%, ctx=134, majf=0, minf=25 00:30:26.242 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:26.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:26.242 issued rwts: total=11910,11899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.242 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:26.242 00:30:26.242 Run status group 0 (all jobs): 00:30:26.242 READ: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=46.5MiB (48.8MB), run=2009-2009msec 00:30:26.242 WRITE: bw=23.1MiB/s (24.3MB/s), 23.1MiB/s-23.1MiB/s (24.3MB/s-24.3MB/s), io=46.5MiB (48.7MB), run=2009-2009msec 00:30:26.242 19:02:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:26.242 19:02:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:27.615 19:02:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=40d22dc0-e085-4b10-9fae-31d1d5a7e391 00:30:27.615 19:02:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 40d22dc0-e085-4b10-9fae-31d1d5a7e391 00:30:27.615 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=40d22dc0-e085-4b10-9fae-31d1d5a7e391 00:30:27.615 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:27.615 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1367 -- # local cs 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:27.616 { 00:30:27.616 "uuid": "af51a7e2-a89c-434c-8428-83111e310adb", 00:30:27.616 "name": "lvs_0", 00:30:27.616 "base_bdev": "Nvme0n1", 00:30:27.616 "total_data_clusters": 930, 00:30:27.616 "free_clusters": 0, 00:30:27.616 "block_size": 512, 00:30:27.616 "cluster_size": 1073741824 00:30:27.616 }, 00:30:27.616 { 00:30:27.616 "uuid": "40d22dc0-e085-4b10-9fae-31d1d5a7e391", 00:30:27.616 "name": "lvs_n_0", 00:30:27.616 "base_bdev": "e432edc6-6c19-46a7-ae12-d68cb9611db4", 00:30:27.616 "total_data_clusters": 237847, 00:30:27.616 "free_clusters": 237847, 00:30:27.616 "block_size": 512, 00:30:27.616 "cluster_size": 4194304 00:30:27.616 } 00:30:27.616 ]' 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="40d22dc0-e085-4b10-9fae-31d1d5a7e391") .free_clusters' 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="40d22dc0-e085-4b10-9fae-31d1d5a7e391") .cluster_size' 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:27.616 951388 00:30:27.616 19:02:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:28.546 5b58a3aa-34d2-46fc-874e-3e2a6328c761 00:30:28.546 19:02:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:28.546 19:02:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:28.803 19:02:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:29.059 19:02:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:29.059 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:29.059 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:29.059 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:29.060 19:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:29.316 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:29.316 fio-3.35 00:30:29.316 Starting 1 thread 00:30:29.316 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.838 00:30:31.838 test: (groupid=0, jobs=1): err= 0: pid=3709284: Sun Jul 14 19:02:19 2024 00:30:31.838 read: IOPS=5791, BW=22.6MiB/s 
(23.7MB/s)(45.4MiB/2008msec) 00:30:31.838 slat (usec): min=2, max=165, avg= 2.67, stdev= 2.42 00:30:31.838 clat (usec): min=4554, max=21193, avg=12127.02, stdev=1130.73 00:30:31.838 lat (usec): min=4571, max=21195, avg=12129.69, stdev=1130.58 00:30:31.838 clat percentiles (usec): 00:30:31.838 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10683], 20.00th=[11207], 00:30:31.838 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:30:31.838 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:30:31.838 | 99.00th=[14484], 99.50th=[14877], 99.90th=[18482], 99.95th=[20055], 00:30:31.838 | 99.99th=[20317] 00:30:31.838 bw ( KiB/s): min=22016, max=23760, per=99.75%, avg=23110.00, stdev=758.31, samples=4 00:30:31.838 iops : min= 5504, max= 5940, avg=5777.50, stdev=189.58, samples=4 00:30:31.838 write: IOPS=5772, BW=22.5MiB/s (23.6MB/s)(45.3MiB/2008msec); 0 zone resets 00:30:31.838 slat (usec): min=2, max=137, avg= 2.78, stdev= 1.88 00:30:31.838 clat (usec): min=2230, max=18577, avg=9892.02, stdev=915.08 00:30:31.838 lat (usec): min=2250, max=18580, avg=9894.79, stdev=914.98 00:30:31.838 clat percentiles (usec): 00:30:31.838 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:30:31.838 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:30:31.838 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:30:31.838 | 99.00th=[11863], 99.50th=[12125], 99.90th=[15533], 99.95th=[16909], 00:30:31.838 | 99.99th=[18482] 00:30:31.838 bw ( KiB/s): min=23056, max=23104, per=99.95%, avg=23078.00, stdev=22.03, samples=4 00:30:31.838 iops : min= 5764, max= 5776, avg=5769.50, stdev= 5.51, samples=4 00:30:31.838 lat (msec) : 4=0.05%, 10=28.47%, 20=71.46%, 50=0.02% 00:30:31.838 cpu : usr=58.55%, sys=39.21%, ctx=106, majf=0, minf=25 00:30:31.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:31.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:30:31.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:31.838 issued rwts: total=11630,11591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:31.838 00:30:31.838 Run status group 0 (all jobs): 00:30:31.838 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2008-2008msec 00:30:31.838 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2008-2008msec 00:30:31.838 19:02:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:32.096 19:02:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:32.096 19:02:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:36.275 19:02:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:36.275 19:02:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:39.551 19:02:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:39.551 19:02:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 
00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:41.455 rmmod nvme_tcp 00:30:41.455 rmmod nvme_fabrics 00:30:41.455 rmmod nvme_keyring 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3706513 ']' 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3706513 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3706513 ']' 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3706513 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3706513 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3706513' 00:30:41.455 killing process with pid 3706513 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3706513 00:30:41.455 19:02:29 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3706513 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.455 19:02:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.049 19:02:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:44.049 00:30:44.049 real 0m37.199s 00:30:44.049 user 2m22.098s 00:30:44.049 sys 0m7.243s 00:30:44.049 19:02:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:44.049 19:02:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.049 ************************************ 00:30:44.049 END TEST nvmf_fio_host 00:30:44.049 ************************************ 00:30:44.049 19:02:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:44.049 19:02:31 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:44.049 19:02:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:44.049 19:02:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.049 19:02:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:44.049 ************************************ 00:30:44.049 START TEST nvmf_failover 00:30:44.049 
************************************ 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:44.049 * Looking for test storage... 00:30:44.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- 
# NVME_CONNECT='nvme connect' 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.049 19:02:31 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:44.050 19:02:31 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:44.050 19:02:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:45.426 19:02:33 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.426 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:45.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:45.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:45.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:45.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.427 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:45.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:30:45.685 00:30:45.685 --- 10.0.0.2 ping statistics --- 00:30:45.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.685 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:30:45.685 00:30:45.685 --- 10.0.0.1 ping statistics --- 00:30:45.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.685 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3712524 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3712524 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3712524 ']' 
00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:45.685 19:02:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:45.685 [2024-07-14 19:02:33.836575] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:30:45.685 [2024-07-14 19:02:33.836664] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.685 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.685 [2024-07-14 19:02:33.908220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:45.942 [2024-07-14 19:02:34.000753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.942 [2024-07-14 19:02:34.000807] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.942 [2024-07-14 19:02:34.000824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.942 [2024-07-14 19:02:34.000837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.942 [2024-07-14 19:02:34.000849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:45.942 [2024-07-14 19:02:34.000952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.942 [2024-07-14 19:02:34.001000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.942 [2024-07-14 19:02:34.001002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.942 19:02:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:45.942 19:02:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:45.942 19:02:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:45.942 19:02:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:45.942 19:02:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:45.942 19:02:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.942 19:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:46.199 [2024-07-14 19:02:34.365662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.199 19:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:46.457 Malloc0 00:30:46.457 19:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:46.714 19:02:34 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:46.972 19:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.229 [2024-07-14 19:02:35.386964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.229 19:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:47.487 [2024-07-14 19:02:35.635669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:47.487 19:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:47.744 [2024-07-14 19:02:35.876458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3712812 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3712812 /var/tmp/bdevperf.sock 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3712812 ']' 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:47.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:47.744 19:02:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:48.001 19:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:48.001 19:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:48.001 19:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.565 NVMe0n1 00:30:48.565 19:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.822 00:30:49.080 19:02:37 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3712948 00:30:49.080 19:02:37 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:49.080 19:02:37 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:50.018 19:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.277 [2024-07-14 19:02:38.291102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3c970 is same with the state(5) to be set 00:30:50.277 19:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:53.575 19:02:41 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.575 00:30:53.575 19:02:41 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:53.833 19:02:42 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:57.128 19:02:45 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.128 [2024-07-14 19:02:45.264459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.128 19:02:45 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:58.068 19:02:46 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:58.326 [2024-07-14 19:02:46.519633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef7fa0 is same with the state(5) to be set 00:30:58.326 19:02:46 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3712948 00:31:04.901 0 00:31:04.901 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3712812 00:31:04.901 19:02:52 nvmf_tcp.nvmf_failover --
common/autotest_common.sh@948 -- # '[' -z 3712812 ']' 00:31:04.901 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3712812 00:31:04.901 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:04.902 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.902 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3712812 00:31:04.902 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:04.902 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:04.902 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3712812' 00:31:04.902 killing process with pid 3712812 00:31:04.902 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3712812 00:31:04.902 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3712812 00:31:04.902 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:04.902 [2024-07-14 19:02:35.939462] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:04.902 [2024-07-14 19:02:35.939540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712812 ] 00:31:04.902 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.902 [2024-07-14 19:02:35.999961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.902 [2024-07-14 19:02:36.086336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.902 Running I/O for 15 seconds... 
00:31:04.902 [2024-07-14 19:02:38.292798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.292841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.292870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.292895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.292914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.292929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.292945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.292960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.292975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.292990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 
[2024-07-14 19:02:38.293378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 
[2024-07-14 19:02:38.293882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.293977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.293991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.294007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.902 [2024-07-14 19:02:38.294024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.902 [2024-07-14 19:02:38.294040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.902 [2024-07-14 19:02:38.294053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77008 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 
19:02:38.294391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294547] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.294981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.294995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 
[2024-07-14 19:02:38.295235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.903 [2024-07-14 19:02:38.295393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.903 [2024-07-14 19:02:38.295407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.904 [2024-07-14 19:02:38.295439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77344 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77352 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77360 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77368 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77376 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77384 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77392 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77408 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77416 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.295960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.295974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.295985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.295996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77424 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77432 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77440 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77448 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77456 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77464 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 
[2024-07-14 19:02:38.296279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77472 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77480 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77488 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:77496 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77504 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77512 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77520 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296605] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:31:04.904 [2024-07-14 19:02:38.296688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.904 [2024-07-14 19:02:38.296700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.904 [2024-07-14 19:02:38.296712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.904 [2024-07-14 19:02:38.296723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.296735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.296748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.296766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 
19:02:38.296778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.296790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.296804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.296815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.296826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.296839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.296853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.296864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.296883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.296898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.296911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.296923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.296934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.296946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.296959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.296970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.296982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.296998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.297047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.297094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297118] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.297141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.297196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.297243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 
[2024-07-14 19:02:38.297290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.297338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.297392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76944 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.297440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:04.905 [2024-07-14 19:02:38.297464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.905 [2024-07-14 19:02:38.297475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76952 len:8 PRP1 0x0 PRP2 0x0 00:31:04.905 [2024-07-14 19:02:38.297487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297542] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bac250 was disconnected and freed. reset controller. 00:31:04.905 [2024-07-14 19:02:38.297560] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:04.905 [2024-07-14 19:02:38.297593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.905 [2024-07-14 19:02:38.297611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.905 [2024-07-14 19:02:38.297645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.905 [2024-07-14 19:02:38.297673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:31:04.905 [2024-07-14 19:02:38.297699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:38.297721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:04.905 [2024-07-14 19:02:38.297764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b85bd0 (9): Bad file descriptor 00:31:04.905 [2024-07-14 19:02:38.301000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:04.905 [2024-07-14 19:02:38.464685] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:04.905 [2024-07-14 19:02:41.991962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.905 [2024-07-14 19:02:41.992030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:41.992059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.905 [2024-07-14 19:02:41.992075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:41.992103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.905 [2024-07-14 19:02:41.992118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:41.992134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:04.905 [2024-07-14 19:02:41.992164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.905 [2024-07-14 19:02:41.992180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992324] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.992980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.992995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.993009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.993038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.993068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.993097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.993126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.993157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.993200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.906 [2024-07-14 19:02:41.993229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.906 [2024-07-14 19:02:41.993261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.906 [2024-07-14 19:02:41.993291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.906 [2024-07-14 19:02:41.993320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.906 [2024-07-14 19:02:41.993348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993363] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.906 [2024-07-14 19:02:41.993376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.906 [2024-07-14 19:02:41.993404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.906 [2024-07-14 19:02:41.993432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.906 [2024-07-14 19:02:41.993446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.906 [2024-07-14 19:02:41.993460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:04.907 [2024-07-14 19:02:41.993693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.993977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.993993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:04.907 [2024-07-14 19:02:41.994232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.907 [2024-07-14 19:02:41.994692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.907 [2024-07-14 19:02:41.994744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111976 len:8 PRP1 0x0 PRP2 0x0 00:31:04.907 [2024-07-14 19:02:41.994758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994775] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.907 [2024-07-14 19:02:41.994788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.907 [2024-07-14 19:02:41.994800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111984 len:8 PRP1 0x0 PRP2 0x0 00:31:04.907 [2024-07-14 19:02:41.994813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.907 [2024-07-14 19:02:41.994843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.907 [2024-07-14 19:02:41.994854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111992 len:8 PRP1 0x0 PRP2 0x0 00:31:04.907 [2024-07-14 19:02:41.994867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.907 [2024-07-14 19:02:41.994889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.907 [2024-07-14 19:02:41.994902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.907 [2024-07-14 19:02:41.994914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112000 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.994927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.994940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.994951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.994962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112008 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.994975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.994989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112016 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112024 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112032 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112040 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112048 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112056 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995304] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112064 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112072 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112080 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112088 len:8 PRP1 0x0 PRP2 0x0 
00:31:04.908 [2024-07-14 19:02:41.995475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112096 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112104 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112112 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995637] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112120 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112128 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112136 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112144 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112152 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112160 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.995954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112168 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.995967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.995979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.995994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.996006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112176 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.996018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.996032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.996043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.996055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112184 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.996067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.996080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.996091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.996102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112192 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.996115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.996128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.996139] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.996151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112200 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.996163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.908 [2024-07-14 19:02:41.996176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.908 [2024-07-14 19:02:41.996187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.908 [2024-07-14 19:02:41.996198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112208 len:8 PRP1 0x0 PRP2 0x0 00:31:04.908 [2024-07-14 19:02:41.996211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112216 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112224 len:8 PRP1 0x0 PRP2 0x0 
00:31:04.909 [2024-07-14 19:02:41.996308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112232 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112240 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112248 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996471] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112256 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112264 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112272 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996637] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112280 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111576 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.909 [2024-07-14 19:02:41.996721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.909 [2024-07-14 19:02:41.996732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111584 len:8 PRP1 0x0 PRP2 0x0 00:31:04.909 [2024-07-14 19:02:41.996748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996821] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d50a00 was disconnected and freed. reset controller. 
00:31:04.909 [2024-07-14 19:02:41.996840] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:04.909 [2024-07-14 19:02:41.996874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.909 [2024-07-14 19:02:41.996899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.909 [2024-07-14 19:02:41.996936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.909 [2024-07-14 19:02:41.996963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.996976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.909 [2024-07-14 19:02:41.996989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:41.997001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:31:04.909 [2024-07-14 19:02:41.997040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b85bd0 (9): Bad file descriptor 00:31:04.909 [2024-07-14 19:02:42.000306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:04.909 [2024-07-14 19:02:42.166038] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:04.909 [2024-07-14 19:02:46.520314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.909 [2024-07-14 19:02:46.520358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.909 [2024-07-14 19:02:46.520404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.909 [2024-07-14 19:02:46.520436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.909 [2024-07-14 19:02:46.520466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72688 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.909 [2024-07-14 19:02:46.520495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.909 [2024-07-14 19:02:46.520542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.909 [2024-07-14 19:02:46.520573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.909 [2024-07-14 19:02:46.520602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.909 [2024-07-14 19:02:46.520631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.909 [2024-07-14 19:02:46.520660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520675] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.909 [2024-07-14 19:02:46.520689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.909 [2024-07-14 19:02:46.520718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.909 [2024-07-14 19:02:46.520747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.909 [2024-07-14 19:02:46.520776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.909 [2024-07-14 19:02:46.520791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.520805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.520820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.520834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.520849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.520864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.520886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.520902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.520922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.520937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.520952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.520966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.520981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.520995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.521010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:04.910 [2024-07-14 19:02:46.521023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.521038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.521052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.521068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.521082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.521097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.521111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.521127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.521142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.521157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.521171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.521186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.521200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.521215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.521229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.910 [2024-07-14 19:02:46.521243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.910 [2024-07-14 19:02:46.521257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 
[2024-07-14 19:02:46.521521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521687] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.911 [2024-07-14 19:02:46.521911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.911 [2024-07-14 19:02:46.521940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.911 [2024-07-14 19:02:46.521969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.521984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.521998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522026] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 
nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 
[2024-07-14 19:02:46.522363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.911 [2024-07-14 19:02:46.522599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.911 [2024-07-14 19:02:46.522613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 
19:02:46.522864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.522974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.522988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523032] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:04.912 [2024-07-14 19:02:46.523680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 
[2024-07-14 19:02:46.523708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 [2024-07-14 19:02:46.523737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 [2024-07-14 19:02:46.523766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 [2024-07-14 19:02:46.523795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 [2024-07-14 19:02:46.523823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 [2024-07-14 19:02:46.523851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 [2024-07-14 19:02:46.523887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 [2024-07-14 19:02:46.523922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 [2024-07-14 19:02:46.523951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.912 [2024-07-14 19:02:46.523967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.912 [2024-07-14 19:02:46.523980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.913 [2024-07-14 19:02:46.524028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72800 len:8 PRP1 0x0 PRP2 0x0 00:31:04.913 [2024-07-14 19:02:46.524041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.913 [2024-07-14 19:02:46.524070] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.913 [2024-07-14 19:02:46.524082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72808 len:8 PRP1 0x0 PRP2 0x0 00:31:04.913 [2024-07-14 19:02:46.524094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.913 [2024-07-14 19:02:46.524119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.913 [2024-07-14 19:02:46.524130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72816 len:8 PRP1 0x0 PRP2 0x0 00:31:04.913 [2024-07-14 19:02:46.524143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.913 [2024-07-14 19:02:46.524166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.913 [2024-07-14 19:02:46.524177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72824 len:8 PRP1 0x0 PRP2 0x0 00:31:04.913 [2024-07-14 19:02:46.524190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.913 [2024-07-14 19:02:46.524214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.913 [2024-07-14 19:02:46.524225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72832 len:8 PRP1 0x0 PRP2 0x0 00:31:04.913 
[2024-07-14 19:02:46.524238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:04.913 [2024-07-14 19:02:46.524262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:04.913 [2024-07-14 19:02:46.524273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72840 len:8 PRP1 0x0 PRP2 0x0 00:31:04.913 [2024-07-14 19:02:46.524285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524341] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d507f0 was disconnected and freed. reset controller. 00:31:04.913 [2024-07-14 19:02:46.524363] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:04.913 [2024-07-14 19:02:46.524396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.913 [2024-07-14 19:02:46.524414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.913 [2024-07-14 19:02:46.524442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.913 [2024-07-14 19:02:46.524469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:04.913 [2024-07-14 19:02:46.524496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.913 [2024-07-14 19:02:46.524509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:04.913 [2024-07-14 19:02:46.524560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b85bd0 (9): Bad file descriptor 00:31:04.913 [2024-07-14 19:02:46.527766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:04.913 [2024-07-14 19:02:46.685117] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:04.913 00:31:04.913 Latency(us) 00:31:04.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.913 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:04.913 Verification LBA range: start 0x0 length 0x4000 00:31:04.913 NVMe0n1 : 15.01 8343.50 32.59 1274.18 0.00 13280.50 546.13 15243.19 00:31:04.913 =================================================================================================================== 00:31:04.913 Total : 8343.50 32.59 1274.18 0.00 13280.50 546.13 15243.19 00:31:04.913 Received shutdown signal, test time was about 15.000000 seconds 00:31:04.913 00:31:04.913 Latency(us) 00:31:04.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.913 =================================================================================================================== 00:31:04.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3714783 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3714783 /var/tmp/bdevperf.sock 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3714783 ']' 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:04.913 19:02:52 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:04.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:04.913 [2024-07-14 19:02:52.973334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:04.913 19:02:52 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:05.193 [2024-07-14 19:02:53.230038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:05.193 19:02:53 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:05.451 NVMe0n1 00:31:05.710 19:02:53 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:05.969 00:31:05.969 19:02:53 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:06.226 00:31:06.226 19:02:54 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:06.226 19:02:54 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:06.483 19:02:54 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:06.744 19:02:54 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:10.032 19:02:57 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:10.032 19:02:57 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:10.032 19:02:58 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3715454 00:31:10.032 19:02:58 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:10.032 19:02:58 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3715454 00:31:11.406 0 00:31:11.406 19:02:59 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:11.406 [2024-07-14 19:02:52.502222] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:31:11.406 [2024-07-14 19:02:52.502312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714783 ] 00:31:11.406 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.406 [2024-07-14 19:02:52.563633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.406 [2024-07-14 19:02:52.646698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.406 [2024-07-14 19:02:54.827047] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:11.406 [2024-07-14 19:02:54.827116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.406 [2024-07-14 19:02:54.827137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.406 [2024-07-14 19:02:54.827152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.406 [2024-07-14 19:02:54.827165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.406 [2024-07-14 19:02:54.827179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.406 [2024-07-14 19:02:54.827192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.406 [2024-07-14 19:02:54.827206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.406 [2024-07-14 19:02:54.827218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.406 [2024-07-14 19:02:54.827232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.406 [2024-07-14 19:02:54.827272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.406 [2024-07-14 19:02:54.827302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x816bd0 (9): Bad file descriptor 00:31:11.406 [2024-07-14 19:02:54.874347] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:11.406 Running I/O for 1 seconds... 00:31:11.406 00:31:11.406 Latency(us) 00:31:11.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.406 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:11.406 Verification LBA range: start 0x0 length 0x4000 00:31:11.406 NVMe0n1 : 1.01 8442.82 32.98 0.00 0.00 15097.70 658.39 13495.56 00:31:11.406 =================================================================================================================== 00:31:11.406 Total : 8442.82 32.98 0.00 0.00 15097.70 658.39 13495.56 00:31:11.406 19:02:59 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:11.406 19:02:59 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:11.406 19:02:59 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:11.664 19:02:59 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:31:11.664 19:02:59 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:11.921 19:03:00 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:12.178 19:03:00 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:15.462 19:03:03 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3714783 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3714783 ']' 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3714783 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3714783 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3714783' 00:31:15.463 killing process with pid 3714783 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3714783 00:31:15.463 19:03:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3714783 00:31:15.720 19:03:03 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 
00:31:15.720 19:03:03 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:15.976 rmmod nvme_tcp 00:31:15.976 rmmod nvme_fabrics 00:31:15.976 rmmod nvme_keyring 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3712524 ']' 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3712524 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3712524 ']' 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3712524 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:15.976 19:03:04 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3712524 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3712524' 00:31:15.976 killing process with pid 3712524 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3712524 00:31:15.976 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3712524 00:31:16.232 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:16.232 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:16.232 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:16.232 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:16.232 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:16.232 19:03:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.232 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:16.232 19:03:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.763 19:03:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:18.764 00:31:18.764 real 0m34.758s 00:31:18.764 user 2m2.204s 00:31:18.764 sys 0m6.049s 00:31:18.764 19:03:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:18.764 19:03:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:18.764 ************************************ 00:31:18.764 END TEST nvmf_failover 00:31:18.764 ************************************ 00:31:18.764 
19:03:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:18.764 19:03:06 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:18.764 19:03:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:18.764 19:03:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:18.764 19:03:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:18.764 ************************************ 00:31:18.764 START TEST nvmf_host_discovery 00:31:18.764 ************************************ 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:18.764 * Looking for test storage... 00:31:18.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:18.764 19:03:06 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:18.764 19:03:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:20.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:20.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:20.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.662 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:20.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:20.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:31:20.663 00:31:20.663 --- 10.0.0.2 ping statistics --- 00:31:20.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.663 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:31:20.663 00:31:20.663 --- 10.0.0.1 ping statistics --- 00:31:20.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.663 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3718578 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3718578 00:31:20.663 
19:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3718578 ']' 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:20.663 19:03:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.663 [2024-07-14 19:03:08.725432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:20.663 [2024-07-14 19:03:08.725529] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.663 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.663 [2024-07-14 19:03:08.797781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.920 [2024-07-14 19:03:08.892896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.920 [2024-07-14 19:03:08.892956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.921 [2024-07-14 19:03:08.892970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.921 [2024-07-14 19:03:08.892982] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:20.921 [2024-07-14 19:03:08.892993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.921 [2024-07-14 19:03:08.893028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.921 [2024-07-14 19:03:09.032949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.921 [2024-07-14 19:03:09.041126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:20.921 19:03:09 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.921 null0 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.921 null1 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3718801 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3718801 /tmp/host.sock 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3718801 ']' 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:20.921 
19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:20.921 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:20.921 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.921 [2024-07-14 19:03:09.113943] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:20.921 [2024-07-14 19:03:09.114026] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718801 ] 00:31:20.921 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.179 [2024-07-14 19:03:09.177388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.179 [2024-07-14 19:03:09.268193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.179 19:03:09 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:21.179 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 
00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 
-- # [[ '' == '' ]] 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:21.437 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.696 [2024-07-14 19:03:09.678811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # (( max-- )) 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:21.696 19:03:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:22.261 [2024-07-14 19:03:10.450624] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:22.261 [2024-07-14 19:03:10.450653] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:22.261 [2024-07-14 19:03:10.450684] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:22.519 [2024-07-14 19:03:10.578117] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new 
subsystem nvme0 00:31:22.519 [2024-07-14 19:03:10.641574] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:22.519 [2024-07-14 19:03:10.641600] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:22.776 
19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:22.776 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_subsystem_paths nvme0 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 
00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.777 19:03:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' 
'"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:23.034 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.035 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:23.035 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:23.293 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.294 [2024-07-14 19:03:11.347685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:23.294 [2024-07-14 19:03:11.348399] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:23.294 [2024-07-14 19:03:11.348434] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 
00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' 
'$NVMF_SECOND_PORT"' ']]' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.294 [2024-07-14 19:03:11.474319] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:23.294 19:03:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:23.553 [2024-07-14 19:03:11.776677] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:23.553 [2024-07-14 19:03:11.776703] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:23.553 [2024-07-14 19:03:11.776714] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 
'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' 
'((notification_count' == 'expected_count))' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.494 [2024-07-14 19:03:12.572332] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:24.494 [2024-07-14 19:03:12.572367] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:24.494 [2024-07-14 19:03:12.572417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.494 [2024-07-14 19:03:12.572447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.494 [2024-07-14 19:03:12.572466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.494 [2024-07-14 19:03:12.572481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.494 [2024-07-14 19:03:12.572497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.494 [2024-07-14 19:03:12.572511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.494 [2024-07-14 19:03:12.572526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:24.494 [2024-07-14 19:03:12.572541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:24.494 [2024-07-14 19:03:12.572555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 
-- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:24.494 [2024-07-14 19:03:12.582418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.494 [2024-07-14 19:03:12.592459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.494 [2024-07-14 19:03:12.592642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.494 [2024-07-14 19:03:12.592673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.494 [2024-07-14 19:03:12.592689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.494 [2024-07-14 19:03:12.592711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.494 [2024-07-14 19:03:12.592732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.494 [2024-07-14 19:03:12.592746] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.494 [2024-07-14 19:03:12.592760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.494 [2024-07-14 19:03:12.592779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:24.494 [2024-07-14 19:03:12.602550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.494 [2024-07-14 19:03:12.602751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.494 [2024-07-14 19:03:12.602779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.494 [2024-07-14 19:03:12.602795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.494 [2024-07-14 19:03:12.602816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.494 [2024-07-14 19:03:12.602836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.494 [2024-07-14 19:03:12.602849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.494 [2024-07-14 19:03:12.602862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.494 [2024-07-14 19:03:12.602891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:24.494 [2024-07-14 19:03:12.612634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.494 [2024-07-14 19:03:12.612835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.494 [2024-07-14 19:03:12.612883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.494 [2024-07-14 19:03:12.612902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.494 [2024-07-14 19:03:12.612946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.494 [2024-07-14 19:03:12.612968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.494 [2024-07-14 19:03:12.612981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.494 [2024-07-14 19:03:12.612994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.494 [2024-07-14 19:03:12.613012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:24.494 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:24.494 [2024-07-14 19:03:12.622707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.494 [2024-07-14 19:03:12.622933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.494 [2024-07-14 19:03:12.622961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.495 [2024-07-14 
19:03:12.622978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.495 [2024-07-14 19:03:12.622999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.495 [2024-07-14 19:03:12.623019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.495 [2024-07-14 19:03:12.623033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.495 [2024-07-14 19:03:12.623046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.495 [2024-07-14 19:03:12.623079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:24.495 [2024-07-14 19:03:12.632785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.495 [2024-07-14 19:03:12.633010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.495 [2024-07-14 19:03:12.633040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.495 [2024-07-14 19:03:12.633056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.495 [2024-07-14 19:03:12.633078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.495 [2024-07-14 19:03:12.633111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.495 [2024-07-14 19:03:12.633134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.495 [2024-07-14 19:03:12.633148] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.495 [2024-07-14 19:03:12.633167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.495 [2024-07-14 19:03:12.642883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.495 [2024-07-14 19:03:12.643046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.495 [2024-07-14 19:03:12.643073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.495 [2024-07-14 19:03:12.643088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.495 [2024-07-14 19:03:12.643110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.495 [2024-07-14 19:03:12.643129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.495 [2024-07-14 19:03:12.643142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.495 [2024-07-14 19:03:12.643155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.495 [2024-07-14 19:03:12.643188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:24.495 [2024-07-14 19:03:12.652968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.495 [2024-07-14 19:03:12.653096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.495 [2024-07-14 19:03:12.653124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.495 [2024-07-14 19:03:12.653139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.495 [2024-07-14 19:03:12.653160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.495 [2024-07-14 19:03:12.653180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.495 [2024-07-14 19:03:12.653193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.495 [2024-07-14 19:03:12.653206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.495 [2024-07-14 19:03:12.653237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:24.495 [2024-07-14 19:03:12.663040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.495 [2024-07-14 19:03:12.663220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.495 [2024-07-14 19:03:12.663252] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.495 [2024-07-14 19:03:12.663270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.495 [2024-07-14 19:03:12.663294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.495 [2024-07-14 19:03:12.663361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.495 [2024-07-14 19:03:12.663383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.495 [2024-07-14 19:03:12.663398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.495 [2024-07-14 19:03:12.663419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.495 [2024-07-14 19:03:12.673114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.495 [2024-07-14 19:03:12.673289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.495 [2024-07-14 19:03:12.673318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.495 [2024-07-14 19:03:12.673335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.495 [2024-07-14 19:03:12.673358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.495 [2024-07-14 19:03:12.673392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.495 [2024-07-14 19:03:12.673411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.495 [2024-07-14 19:03:12.673425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.495 [2024-07-14 19:03:12.673445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:24.495 [2024-07-14 19:03:12.683185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.495 [2024-07-14 19:03:12.683367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.495 [2024-07-14 19:03:12.683394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.495 [2024-07-14 19:03:12.683410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.495 [2024-07-14 19:03:12.683431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.495 [2024-07-14 19:03:12.683476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.495 [2024-07-14 19:03:12.683495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.495 [2024-07-14 19:03:12.683508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.495 [2024-07-14 19:03:12.683526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:24.495 [2024-07-14 19:03:12.693257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:24.495 [2024-07-14 19:03:12.693444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.495 [2024-07-14 19:03:12.693475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d16640 with addr=10.0.0.2, port=4420 00:31:24.495 [2024-07-14 19:03:12.693492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d16640 is same with the state(5) to be set 00:31:24.495 [2024-07-14 19:03:12.693515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d16640 (9): Bad file descriptor 00:31:24.495 [2024-07-14 19:03:12.693552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:24.495 [2024-07-14 19:03:12.693571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:24.495 [2024-07-14 19:03:12.693585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:24.495 [2024-07-14 19:03:12.693606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:31:24.495 19:03:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:24.495 [2024-07-14 19:03:12.699564] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:24.495 [2024-07-14 19:03:12.699596] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 
00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:25.896 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.897 19:03:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:26.833 [2024-07-14 19:03:14.983071] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:26.833 [2024-07-14 19:03:14.983108] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:26.833 [2024-07-14 19:03:14.983129] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:27.092 [2024-07-14 19:03:15.069414] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:27.092 [2024-07-14 19:03:15.178789] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:27.092 [2024-07-14 19:03:15.178835] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:27.092 19:03:15 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.092 request: 00:31:27.092 { 00:31:27.092 "name": "nvme", 00:31:27.092 "trtype": "tcp", 00:31:27.092 "traddr": "10.0.0.2", 00:31:27.092 "adrfam": "ipv4", 00:31:27.092 "trsvcid": "8009", 00:31:27.092 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:27.092 "wait_for_attach": true, 00:31:27.092 "method": "bdev_nvme_start_discovery", 00:31:27.092 "req_id": 1 00:31:27.092 } 00:31:27.092 Got JSON-RPC error response 00:31:27.092 response: 00:31:27.092 { 00:31:27.092 "code": -17, 00:31:27.092 "message": "File exists" 
00:31:27.092 } 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.092 request: 00:31:27.092 { 00:31:27.092 "name": "nvme_second", 00:31:27.092 "trtype": "tcp", 00:31:27.092 "traddr": "10.0.0.2", 00:31:27.092 "adrfam": "ipv4", 00:31:27.092 "trsvcid": "8009", 00:31:27.092 
"hostnqn": "nqn.2021-12.io.spdk:test", 00:31:27.092 "wait_for_attach": true, 00:31:27.092 "method": "bdev_nvme_start_discovery", 00:31:27.092 "req_id": 1 00:31:27.092 } 00:31:27.092 Got JSON-RPC error response 00:31:27.092 response: 00:31:27.092 { 00:31:27.092 "code": -17, 00:31:27.092 "message": "File exists" 00:31:27.092 } 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:27.092 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:27.093 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:27.093 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:27.093 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:27.093 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.093 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:27.093 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.093 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:27.093 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:27.093 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.352 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:27.352 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:27.352 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.352 19:03:15 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.352 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.352 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.353 19:03:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.288 [2024-07-14 19:03:16.390336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.288 [2024-07-14 19:03:16.390388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d161e0 with addr=10.0.0.2, port=8010 00:31:28.288 [2024-07-14 19:03:16.390418] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:28.288 [2024-07-14 19:03:16.390432] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:28.288 [2024-07-14 19:03:16.390445] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:29.226 [2024-07-14 19:03:17.392750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.226 [2024-07-14 19:03:17.392834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d54b00 with addr=10.0.0.2, port=8010 00:31:29.226 [2024-07-14 19:03:17.392868] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:29.226 [2024-07-14 19:03:17.392893] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:29.226 [2024-07-14 19:03:17.392924] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:30.609 [2024-07-14 19:03:18.394904] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:30.609 request: 00:31:30.609 { 00:31:30.609 "name": "nvme_second", 00:31:30.609 "trtype": "tcp", 00:31:30.609 "traddr": "10.0.0.2", 00:31:30.609 "adrfam": "ipv4", 00:31:30.609 "trsvcid": "8010", 00:31:30.609 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:30.609 "wait_for_attach": false, 00:31:30.609 "attach_timeout_ms": 3000, 00:31:30.609 "method": "bdev_nvme_start_discovery", 
00:31:30.609 "req_id": 1 00:31:30.609 } 00:31:30.609 Got JSON-RPC error response 00:31:30.609 response: 00:31:30.609 { 00:31:30.609 "code": -110, 00:31:30.609 "message": "Connection timed out" 00:31:30.609 } 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3718801 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:30.609 rmmod nvme_tcp 00:31:30.609 rmmod nvme_fabrics 00:31:30.609 rmmod nvme_keyring 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3718578 ']' 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3718578 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3718578 ']' 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3718578 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3718578 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3718578' 00:31:30.609 killing process with pid 3718578 00:31:30.609 
19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3718578 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3718578 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:30.609 19:03:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:33.146 00:31:33.146 real 0m14.278s 00:31:33.146 user 0m21.173s 00:31:33.146 sys 0m2.997s 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:33.146 ************************************ 00:31:33.146 END TEST nvmf_host_discovery 00:31:33.146 ************************************ 00:31:33.146 19:03:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:33.146 19:03:20 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:33.146 19:03:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:33.146 19:03:20 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:31:33.146 19:03:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:33.146 ************************************ 00:31:33.146 START TEST nvmf_host_multipath_status 00:31:33.146 ************************************ 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:33.146 * Looking for test storage... 00:31:33.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.146 19:03:20 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:33.146 19:03:20 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.146 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.147 
19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:33.147 19:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:35.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:35.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.054 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:35.055 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:35.055 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.055 19:03:22 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:35.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:35.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:31:35.055 00:31:35.055 --- 10.0.0.2 ping statistics --- 00:31:35.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.055 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:35.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:31:35.055 00:31:35.055 --- 10.0.0.1 ping statistics --- 00:31:35.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.055 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:31:35.055 19:03:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:35.055 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3721978 00:31:35.055 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:35.055 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3721978 00:31:35.055 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3721978 ']' 00:31:35.055 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.055 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:35.055 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.055 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:35.055 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:35.055 [2024-07-14 19:03:23.047332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:31:35.055 [2024-07-14 19:03:23.047414] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.055 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.055 [2024-07-14 19:03:23.111020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:35.055 [2024-07-14 19:03:23.194657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.055 [2024-07-14 19:03:23.194710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.055 [2024-07-14 19:03:23.194739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.055 [2024-07-14 19:03:23.194751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.055 [2024-07-14 19:03:23.194761] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:35.055 [2024-07-14 19:03:23.194827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.055 [2024-07-14 19:03:23.194832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.313 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:35.313 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:35.313 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:35.313 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:35.313 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:35.313 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.313 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3721978 00:31:35.313 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:35.570 [2024-07-14 19:03:23.600332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.570 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:35.828 Malloc0 00:31:35.828 19:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:36.084 19:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:36.341 19:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.598 [2024-07-14 19:03:24.751189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.598 19:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:36.856 [2024-07-14 19:03:25.044140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3722262 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3722262 /var/tmp/bdevperf.sock 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3722262 ']' 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:31:36.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.856 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:37.421 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.421 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:37.421 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:37.680 19:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:37.939 Nvme0n1 00:31:38.199 19:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:38.458 Nvme0n1 00:31:38.716 19:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:38.716 19:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:40.618 19:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:40.618 19:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:40.876 19:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:41.134 19:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:42.073 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:42.073 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:42.073 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.073 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:42.331 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.331 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:42.331 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.331 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:42.589 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:42.589 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:42.589 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.589 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:42.846 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.846 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:42.846 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.846 19:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:43.104 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.104 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:43.104 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.104 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:43.362 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.362 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:43.362 19:03:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.362 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:43.625 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.625 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:43.625 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:43.931 19:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:44.191 19:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:45.128 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:45.128 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:45.128 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.128 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:45.386 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:31:45.386 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:45.386 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.386 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:45.642 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.642 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:45.642 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.642 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:45.899 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.899 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:45.899 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.899 19:03:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:46.156 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.156 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:31:46.156 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.156 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:46.412 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.412 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:46.412 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.412 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:46.669 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.669 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:46.669 19:03:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:46.927 19:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:47.184 19:03:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:48.120 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:48.120 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:48.120 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.120 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:48.378 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.378 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:48.378 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.378 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:48.636 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:48.636 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:48.636 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.636 19:03:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:48.894 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.894 19:03:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:48.894 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.894 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:49.151 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.151 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:49.151 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.151 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:49.411 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.411 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:49.411 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.411 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:49.668 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.668 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
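The `port_status` checks traced above all follow one pattern: query bdevperf's io_paths over the RPC socket, pick one attribute of one listener port with jq, and compare it to the expected value. A minimal sketch of that helper (not the literal `host/multipath_status.sh` source; `RPC_CMD` is an assumed hook so it can be pointed at a stub instead of the live `/var/tmp/bdevperf.sock`):

```shell
# Assumed default matching the rpc.py invocation seen in the trace.
RPC_CMD=${RPC_CMD:-"scripts/rpc.py -s /var/tmp/bdevperf.sock"}

port_status() {
    # $1 = trsvcid (4420/4421), $2 = attribute (current/connected/accessible),
    # $3 = expected value ("true"/"false"); returns 0 iff they match.
    local got
    got=$($RPC_CMD bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ $got == "$3" ]]
}
```

With this shape, `port_status 4420 current true` succeeds exactly when the log prints `[[ true == \t\r\u\e ]]` for that check.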
00:31:49.668 19:03:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:49.926 19:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:50.184 19:03:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:51.121 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:51.121 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:51.121 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.122 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:51.380 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.380 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:51.380 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.380 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:51.638 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:31:51.638 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:51.638 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.638 19:03:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:51.896 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.896 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:51.896 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.896 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:52.153 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.153 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:52.153 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.153 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:52.411 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.411 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:31:52.411 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.411 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:52.668 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:52.668 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:52.668 19:03:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:52.925 19:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:53.185 19:03:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:54.137 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:54.137 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:54.137 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.137 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:54.395 19:03:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.395 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:54.395 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.395 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:54.652 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.652 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:54.652 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.652 19:03:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:54.909 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.909 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:54.909 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.909 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.167 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.167 
19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:55.167 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.167 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:55.424 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.424 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:55.424 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.424 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:55.680 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.680 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:55.680 19:03:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:55.937 19:03:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:56.194 19:03:44 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:31:57.562 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:57.562 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:57.562 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.562 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:57.562 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.562 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:57.562 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.562 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:57.819 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.819 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:57.819 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.819 19:03:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:58.077 19:03:46 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.077 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:58.077 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.077 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:58.335 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.335 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:58.335 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.335 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:58.593 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:58.593 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:58.593 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.593 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:58.850 19:03:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.850 19:03:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:59.108 19:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:59.108 19:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:59.366 19:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:59.635 19:03:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:00.574 19:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:00.574 19:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:00.574 19:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.574 19:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:00.832 19:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.832 19:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:00.832 19:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.832 19:03:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:01.090 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.090 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:01.090 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.090 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:01.348 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.348 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:01.348 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.348 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:01.606 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.606 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:01.606 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
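Each `set_ANA_state` step in the trace is two listener updates, one per port, issued against the target-side RPC (note: no `-s` bdevperf socket on these calls, unlike the `bdev_nvme_get_io_paths` queries). A sketch under the assumption that `NVMF_RPC` stands in for the `scripts/rpc.py` path, with the NQN and address taken from the log:

```shell
NVMF_RPC=${NVMF_RPC:-"scripts/rpc.py"}
NQN="nqn.2016-06.io.spdk:cnode1"

set_ANA_state() {
    # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
    $NVMF_RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $NVMF_RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
```

The test then sleeps (the `sleep 1` after each `set_ANA_state` in the trace) before re-checking path status, giving the host's ANA log page update time to propagate.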
00:32:01.606 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:01.865 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.865 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:01.865 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.865 19:03:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:02.123 19:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.123 19:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:02.123 19:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:02.381 19:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:02.639 19:03:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:03.573 19:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:03.573 19:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:03.573 19:03:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.573 19:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:03.832 19:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:03.832 19:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:03.832 19:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.832 19:03:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:04.090 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.090 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:04.090 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.090 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:04.348 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.348 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:04.348 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.348 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:04.606 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.606 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:04.606 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.606 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:04.864 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.864 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:04.864 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.864 19:03:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:05.122 19:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.122 19:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:05.122 19:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:05.378 19:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:05.635 19:03:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:06.565 19:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:06.565 19:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:06.565 19:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.565 19:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:06.822 19:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.822 19:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:06.822 19:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.822 19:03:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:07.078 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.078 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:07.078 19:03:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.078 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:07.333 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.333 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:07.333 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.333 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:07.596 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.597 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:07.597 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.597 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:07.854 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.854 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:07.854 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.854 19:03:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:08.112 19:03:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:08.112 19:03:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:08.112 19:03:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:08.369 19:03:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:08.628 19:03:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:09.562 19:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:09.562 19:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:09.562 19:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.562 19:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:09.820 19:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.820 19:03:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:09.820 19:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.820 19:03:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:10.078 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:10.078 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:10.078 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.078 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:10.335 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.335 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:10.336 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.336 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:10.593 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.593 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:10.593 
19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.593 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:10.851 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.851 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:10.851 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.851 19:03:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3722262 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3722262 ']' 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3722262 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3722262 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3722262' 00:32:11.109 killing process with pid 3722262 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3722262 00:32:11.109 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3722262 00:32:11.369 Connection closed with partial response: 00:32:11.369 00:32:11.369 00:32:11.369 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3722262 00:32:11.369 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:11.369 [2024-07-14 19:03:25.099372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:11.369 [2024-07-14 19:03:25.099453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722262 ] 00:32:11.369 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.369 [2024-07-14 19:03:25.176316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.369 [2024-07-14 19:03:25.273444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:11.369 Running I/O for 90 seconds... 
00:32:11.369 [2024-07-14 19:03:41.085052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.369 [2024-07-14 19:03:41.085111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 
[2024-07-14 19:03:41.085389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 
19:03:41.085614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.085798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.085814] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.086982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.086998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.087021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.087037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.087060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.087076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.087098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.087115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.087137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.087153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:11.369 [2024-07-14 19:03:41.087176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.369 [2024-07-14 19:03:41.087192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.370 [2024-07-14 19:03:41.087236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.370 [2024-07-14 19:03:41.087276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.370 [2024-07-14 19:03:41.087314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.370 [2024-07-14 19:03:41.087353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.370 [2024-07-14 19:03:41.087392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.370 [2024-07-14 19:03:41.087431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.370 [2024-07-14 19:03:41.087470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.370 [2024-07-14 19:03:41.087509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.370 [2024-07-14 19:03:41.087548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.370 [2024-07-14 19:03:41.087570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.087587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.087609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.087626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.087649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.087666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.087689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.087709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.087733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.087749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.087859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.087888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.087920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.087938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.087964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.087981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.370 [2024-07-14 19:03:41.088406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.370 [2024-07-14 19:03:41.088448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.088970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.088996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.089968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.089997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.090014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:32:11.370 [2024-07-14 19:03:41.090043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.370 [2024-07-14 19:03:41.090065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.090981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.090998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:41.091493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:41.091510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.371 [2024-07-14 19:03:56.702490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.371 [2024-07-14 19:03:56.702532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.371 [2024-07-14 19:03:56.702571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.702979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.702996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.703018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.703034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.703055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.703072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.703094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.703110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.703132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.703148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.703170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.703186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.703208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:11.371 [2024-07-14 19:03:56.703224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.703245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.371 [2024-07-14 19:03:56.703262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.703284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.371 [2024-07-14 19:03:56.703301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:32:11.371 [2024-07-14 19:03:56.703322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.371 [2024-07-14 19:03:56.703343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:32:11.372 [2024-07-14 19:03:56.703366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.372 [2024-07-14 19:03:56.703383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:32:11.372 [2024-07-14 19:03:56.703404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:11.372 [2024-07-14 19:03:56.703421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.703443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.703459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.703482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.703514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.703537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.703552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.703590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.703606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.703628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.703644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.703665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.703682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.703704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.703720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.705581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.705620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.705658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.705697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.705961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.705978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.706017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.706056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.706249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.706287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.372 [2024-07-14 19:03:56.706327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.372 [2024-07-14 19:03:56.706601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.372 [2024-07-14 19:03:56.706618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:11.372 Received shutdown signal, test time was about 32.412857 seconds 00:32:11.372 00:32:11.372 Latency(us) 00:32:11.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.372 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:11.372 Verification LBA range: start 0x0 length 0x4000 00:32:11.372 Nvme0n1 : 32.41 7992.34 31.22 0.00 0.00 15989.38 379.26 4026531.84 00:32:11.372 =================================================================================================================== 00:32:11.372 Total : 7992.34 31.22 0.00 0.00 15989.38 379.26 4026531.84 00:32:11.372 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:11.629 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:32:11.629 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:11.629 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:32:11.630 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:11.630 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:32:11.630 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:11.630 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:32:11.630 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:32:11.630 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:11.630 rmmod nvme_tcp 00:32:11.630 rmmod nvme_fabrics 00:32:11.630 rmmod nvme_keyring 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3721978 ']' 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3721978 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3721978 ']' 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3721978 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3721978 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3721978' 00:32:11.887 killing process with pid 3721978 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3721978 00:32:11.887 19:03:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3721978 00:32:12.147 19:04:00 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:12.147 19:04:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:12.147 19:04:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:12.147 19:04:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:12.147 19:04:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:12.147 19:04:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.147 19:04:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:12.147 19:04:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.051 19:04:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:14.051 00:32:14.051 real 0m41.332s 00:32:14.051 user 2m4.972s 00:32:14.051 sys 0m10.475s 00:32:14.051 19:04:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:14.051 19:04:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:14.051 ************************************ 00:32:14.051 END TEST nvmf_host_multipath_status 00:32:14.051 ************************************ 00:32:14.051 19:04:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:14.051 19:04:02 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:14.051 19:04:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:14.051 19:04:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:14.051 19:04:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.051 
************************************ 00:32:14.051 START TEST nvmf_discovery_remove_ifc 00:32:14.051 ************************************ 00:32:14.051 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:14.309 * Looking for test storage... 00:32:14.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:14.309 19:04:02 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.309 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:14.310 19:04:02 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:14.310 19:04:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@296 -- # e810=() 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:16.210 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:16.210 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.210 
19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:16.210 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:16.210 19:04:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:16.210 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:32:16.210 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:16.211 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.211 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:16.211 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:16.211 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:16.211 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:16.211 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:16.211 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:16.211 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:16.211 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:16.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:16.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:32:16.503 00:32:16.503 --- 10.0.0.2 ping statistics --- 00:32:16.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.503 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:16.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:16.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:32:16.503 00:32:16.503 --- 10.0.0.1 ping statistics --- 00:32:16.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.503 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3728345 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3728345 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3728345 ']' 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:16.503 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.503 [2024-07-14 19:04:04.539693] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:32:16.503 [2024-07-14 19:04:04.539769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.503 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.503 [2024-07-14 19:04:04.610027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.503 [2024-07-14 19:04:04.700830] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.503 [2024-07-14 19:04:04.700902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.503 [2024-07-14 19:04:04.700920] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.503 [2024-07-14 19:04:04.700943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.503 [2024-07-14 19:04:04.700955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:16.503 [2024-07-14 19:04:04.700986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.762 [2024-07-14 19:04:04.854242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.762 [2024-07-14 19:04:04.862461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:16.762 null0 00:32:16.762 [2024-07-14 19:04:04.894392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3728485 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:16.762 19:04:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3728485 /tmp/host.sock 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3728485 ']' 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:16.762 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:16.762 19:04:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.762 [2024-07-14 19:04:04.960467] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:32:16.762 [2024-07-14 19:04:04.960543] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728485 ] 00:32:17.020 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.021 [2024-07-14 19:04:05.023240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.021 [2024-07-14 19:04:05.114338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.021 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:17.279 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.279 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:17.279 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.279 19:04:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.212 [2024-07-14 19:04:06.332595] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:18.212 [2024-07-14 19:04:06.332622] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:18.212 [2024-07-14 19:04:06.332646] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:18.469 [2024-07-14 19:04:06.460101] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:18.469 [2024-07-14 19:04:06.563899] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:18.469 [2024-07-14 19:04:06.563972] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:18.469 [2024-07-14 19:04:06.564006] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:18.469 [2024-07-14 19:04:06.564027] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:18.469 [2024-07-14 19:04:06.564049] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:18.469 19:04:06 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:18.469 [2024-07-14 19:04:06.570488] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xabe300 was disconnected and freed. delete nvme_qpair. 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:18.469 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:18.470 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:18.470 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:18.470 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:18.470 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.470 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:18.470 19:04:06 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.470 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.470 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:18.470 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:18.470 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.727 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:18.727 19:04:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:19.660 19:04:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:20.593 19:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:20.593 19:04:08 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.593 19:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:20.593 19:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.593 19:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:20.593 19:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:20.593 19:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:20.593 19:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.593 19:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:20.593 19:04:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:21.963 19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:21.963 19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:21.963 19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.963 19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.963 19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:21.963 19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:21.963 19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:21.963 19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.963 19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:21.963 
19:04:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:22.896 19:04:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.830 19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:23.830 19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.830 19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:23.830 19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.830 19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.830 19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:23.830 19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:23.830 
19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.830 19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:23.830 19:04:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.830 [2024-07-14 19:04:12.004849] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:23.830 [2024-07-14 19:04:12.004940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.830 [2024-07-14 19:04:12.004962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.830 [2024-07-14 19:04:12.004995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.830 [2024-07-14 19:04:12.005008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.830 [2024-07-14 19:04:12.005023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.830 [2024-07-14 19:04:12.005036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.830 [2024-07-14 19:04:12.005050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.830 [2024-07-14 19:04:12.005063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.830 [2024-07-14 19:04:12.005076] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.830 [2024-07-14 19:04:12.005089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.830 [2024-07-14 19:04:12.005102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84ce0 is same with the state(5) to be set 00:32:23.830 [2024-07-14 19:04:12.014868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84ce0 (9): Bad file descriptor 00:32:23.831 [2024-07-14 19:04:12.024930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:24.765 19:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:24.765 19:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.765 19:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:24.765 19:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.765 19:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.765 19:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:24.765 19:04:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.023 [2024-07-14 19:04:13.074922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:25.023 [2024-07-14 19:04:13.074985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84ce0 with addr=10.0.0.2, port=4420 00:32:25.023 [2024-07-14 19:04:13.075011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84ce0 is same with the state(5) to be set 00:32:25.023 
[2024-07-14 19:04:13.075056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84ce0 (9): Bad file descriptor 00:32:25.023 [2024-07-14 19:04:13.075529] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:25.023 [2024-07-14 19:04:13.075566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:25.023 [2024-07-14 19:04:13.075584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:25.023 [2024-07-14 19:04:13.075602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:25.023 [2024-07-14 19:04:13.075635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.023 [2024-07-14 19:04:13.075654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:25.023 19:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.023 19:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:25.023 19:04:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:25.955 [2024-07-14 19:04:14.078178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:32:25.955 [2024-07-14 19:04:14.078245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:25.955 [2024-07-14 19:04:14.078262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:25.955 [2024-07-14 19:04:14.078278] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:25.955 [2024-07-14 19:04:14.078311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:25.955 [2024-07-14 19:04:14.078355] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:25.955 [2024-07-14 19:04:14.078408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.955 [2024-07-14 19:04:14.078431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.955 [2024-07-14 19:04:14.078451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.955 [2024-07-14 19:04:14.078466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.955 [2024-07-14 19:04:14.078482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.955 [2024-07-14 19:04:14.078497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.955 [2024-07-14 19:04:14.078513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.955 
[2024-07-14 19:04:14.078534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.955 [2024-07-14 19:04:14.078551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.955 [2024-07-14 19:04:14.078565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.955 [2024-07-14 19:04:14.078580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:25.955 [2024-07-14 19:04:14.078687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84160 (9): Bad file descriptor 00:32:25.955 [2024-07-14 19:04:14.079708] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:25.955 [2024-07-14 19:04:14.079733] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:25.955 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.956 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.956 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.956 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.956 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:26.213 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:26.213 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:26.213 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.213 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:26.213 19:04:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:27.146 19:04:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.078 [2024-07-14 19:04:16.138007] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:28.079 [2024-07-14 19:04:16.138043] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:28.079 [2024-07-14 19:04:16.138066] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:28.079 [2024-07-14 19:04:16.225330] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:28.079 19:04:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:28.079 19:04:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:28.079 19:04:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:28.079 19:04:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.079 19:04:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:28.079 19:04:16 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:28.079 19:04:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:28.079 19:04:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.336 19:04:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:28.336 19:04:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:28.336 [2024-07-14 19:04:16.410816] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:28.336 [2024-07-14 19:04:16.410873] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:28.336 [2024-07-14 19:04:16.410933] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:28.336 [2024-07-14 19:04:16.410955] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:28.337 [2024-07-14 19:04:16.410967] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:28.337 [2024-07-14 19:04:16.416738] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xa73920 was disconnected and freed. delete nvme_qpair. 
00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3728485 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3728485 ']' 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3728485 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3728485 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:29.270 19:04:17 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3728485' 00:32:29.270 killing process with pid 3728485 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3728485 00:32:29.270 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3728485 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:29.528 rmmod nvme_tcp 00:32:29.528 rmmod nvme_fabrics 00:32:29.528 rmmod nvme_keyring 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3728345 ']' 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3728345 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3728345 ']' 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # 
kill -0 3728345 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3728345 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3728345' 00:32:29.528 killing process with pid 3728345 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3728345 00:32:29.528 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3728345 00:32:29.786 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:29.787 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:29.787 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:29.787 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:29.787 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:29.787 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.787 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.787 19:04:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.319 19:04:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:32:32.319 00:32:32.319 real 0m17.752s 00:32:32.319 user 0m25.663s 00:32:32.319 sys 0m3.075s 00:32:32.319 19:04:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:32.319 19:04:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:32.319 ************************************ 00:32:32.319 END TEST nvmf_discovery_remove_ifc 00:32:32.319 ************************************ 00:32:32.319 19:04:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:32.319 19:04:20 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:32.319 19:04:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:32.319 19:04:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:32.320 19:04:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.320 ************************************ 00:32:32.320 START TEST nvmf_identify_kernel_target 00:32:32.320 ************************************ 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:32.320 * Looking for test storage... 
00:32:32.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.320 19:04:20 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.320 19:04:20 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:32.320 19:04:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:34.221 19:04:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:34.221 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.221 19:04:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:34.221 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:34.221 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:34.221 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:34.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:32:34.221 00:32:34.221 --- 10.0.0.2 ping statistics --- 00:32:34.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.221 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:32:34.221 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:32:34.221 00:32:34.221 --- 10.0.0.1 ping statistics --- 00:32:34.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.221 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.222 19:04:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:34.222 19:04:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:35.201 Waiting for block devices as requested 00:32:35.460 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:35.460 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:35.460 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:35.718 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:35.719 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:35.719 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:35.978 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:35.978 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:35.978 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:35.978 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:36.235 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:36.236 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:36.236 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:36.236 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:36.494 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:36.494 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:36.494 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:36.494 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:36.494 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:36.494 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:36.494 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:36.494 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:36.494 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:36.494 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:36.494 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:36.494 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:36.752 No valid GPT data, bailing 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:36.752 00:32:36.752 Discovery Log Number of Records 2, Generation counter 2 00:32:36.752 =====Discovery Log Entry 0====== 00:32:36.752 trtype: tcp 00:32:36.752 adrfam: ipv4 00:32:36.752 subtype: current discovery subsystem 00:32:36.752 treq: not specified, sq flow control disable supported 00:32:36.752 portid: 1 00:32:36.752 trsvcid: 4420 00:32:36.752 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:36.752 traddr: 10.0.0.1 00:32:36.752 eflags: none 00:32:36.752 sectype: none 00:32:36.752 =====Discovery Log Entry 1====== 00:32:36.752 trtype: tcp 00:32:36.752 adrfam: ipv4 00:32:36.752 subtype: nvme subsystem 00:32:36.752 treq: not specified, sq flow control disable supported 00:32:36.752 portid: 1 00:32:36.752 trsvcid: 4420 00:32:36.752 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:36.752 traddr: 10.0.0.1 00:32:36.752 eflags: none 00:32:36.752 sectype: none 00:32:36.752 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:36.752 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:36.752 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.752 ===================================================== 00:32:36.752 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:36.752 ===================================================== 00:32:36.752 Controller Capabilities/Features 00:32:36.752 ================================ 00:32:36.752 Vendor ID: 0000 00:32:36.752 Subsystem Vendor ID: 0000 00:32:36.752 Serial Number: 821da83f04d2b7dd90cf 00:32:36.752 Model Number: Linux 00:32:36.752 Firmware Version: 6.7.0-68 00:32:36.752 Recommended Arb Burst: 0 00:32:36.752 IEEE OUI Identifier: 00 00 00 00:32:36.752 Multi-path I/O 00:32:36.752 May have multiple subsystem ports: No 00:32:36.752 May have multiple controllers: No 00:32:36.752 Associated with SR-IOV VF: No 00:32:36.753 Max Data Transfer Size: Unlimited 00:32:36.753 Max Number of Namespaces: 0 00:32:36.753 Max Number of I/O Queues: 1024 00:32:36.753 NVMe Specification Version (VS): 1.3 00:32:36.753 NVMe Specification Version (Identify): 1.3 00:32:36.753 Maximum Queue Entries: 1024 00:32:36.753 Contiguous Queues Required: No 00:32:36.753 Arbitration Mechanisms Supported 00:32:36.753 Weighted Round Robin: Not Supported 00:32:36.753 Vendor Specific: Not Supported 00:32:36.753 Reset Timeout: 7500 ms 00:32:36.753 Doorbell Stride: 4 bytes 00:32:36.753 NVM Subsystem Reset: Not Supported 00:32:36.753 Command Sets Supported 00:32:36.753 NVM Command Set: Supported 00:32:36.753 Boot Partition: Not Supported 00:32:36.753 Memory Page Size Minimum: 4096 bytes 00:32:36.753 Memory Page Size Maximum: 4096 bytes 00:32:36.753 Persistent Memory Region: Not Supported 00:32:36.753 Optional Asynchronous Events Supported 00:32:36.753 Namespace Attribute Notices: Not Supported 00:32:36.753 Firmware Activation Notices: Not Supported 00:32:36.753 ANA Change Notices: Not Supported 00:32:36.753 PLE Aggregate Log Change Notices: Not Supported 
00:32:36.753 LBA Status Info Alert Notices: Not Supported 00:32:36.753 EGE Aggregate Log Change Notices: Not Supported 00:32:36.753 Normal NVM Subsystem Shutdown event: Not Supported 00:32:36.753 Zone Descriptor Change Notices: Not Supported 00:32:36.753 Discovery Log Change Notices: Supported 00:32:36.753 Controller Attributes 00:32:36.753 128-bit Host Identifier: Not Supported 00:32:36.753 Non-Operational Permissive Mode: Not Supported 00:32:36.753 NVM Sets: Not Supported 00:32:36.753 Read Recovery Levels: Not Supported 00:32:36.753 Endurance Groups: Not Supported 00:32:36.753 Predictable Latency Mode: Not Supported 00:32:36.753 Traffic Based Keep ALive: Not Supported 00:32:36.753 Namespace Granularity: Not Supported 00:32:36.753 SQ Associations: Not Supported 00:32:36.753 UUID List: Not Supported 00:32:36.753 Multi-Domain Subsystem: Not Supported 00:32:36.753 Fixed Capacity Management: Not Supported 00:32:36.753 Variable Capacity Management: Not Supported 00:32:36.753 Delete Endurance Group: Not Supported 00:32:36.753 Delete NVM Set: Not Supported 00:32:36.753 Extended LBA Formats Supported: Not Supported 00:32:36.753 Flexible Data Placement Supported: Not Supported 00:32:36.753 00:32:36.753 Controller Memory Buffer Support 00:32:36.753 ================================ 00:32:36.753 Supported: No 00:32:36.753 00:32:36.753 Persistent Memory Region Support 00:32:36.753 ================================ 00:32:36.753 Supported: No 00:32:36.753 00:32:36.753 Admin Command Set Attributes 00:32:36.753 ============================ 00:32:36.753 Security Send/Receive: Not Supported 00:32:36.753 Format NVM: Not Supported 00:32:36.753 Firmware Activate/Download: Not Supported 00:32:36.753 Namespace Management: Not Supported 00:32:36.753 Device Self-Test: Not Supported 00:32:36.753 Directives: Not Supported 00:32:36.753 NVMe-MI: Not Supported 00:32:36.753 Virtualization Management: Not Supported 00:32:36.753 Doorbell Buffer Config: Not Supported 00:32:36.753 Get LBA Status 
Capability: Not Supported 00:32:36.753 Command & Feature Lockdown Capability: Not Supported 00:32:36.753 Abort Command Limit: 1 00:32:36.753 Async Event Request Limit: 1 00:32:36.753 Number of Firmware Slots: N/A 00:32:36.753 Firmware Slot 1 Read-Only: N/A 00:32:36.753 Firmware Activation Without Reset: N/A 00:32:36.753 Multiple Update Detection Support: N/A 00:32:36.753 Firmware Update Granularity: No Information Provided 00:32:36.753 Per-Namespace SMART Log: No 00:32:36.753 Asymmetric Namespace Access Log Page: Not Supported 00:32:36.753 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:36.753 Command Effects Log Page: Not Supported 00:32:36.753 Get Log Page Extended Data: Supported 00:32:36.753 Telemetry Log Pages: Not Supported 00:32:36.753 Persistent Event Log Pages: Not Supported 00:32:36.753 Supported Log Pages Log Page: May Support 00:32:36.753 Commands Supported & Effects Log Page: Not Supported 00:32:36.753 Feature Identifiers & Effects Log Page:May Support 00:32:36.753 NVMe-MI Commands & Effects Log Page: May Support 00:32:36.753 Data Area 4 for Telemetry Log: Not Supported 00:32:36.753 Error Log Page Entries Supported: 1 00:32:36.753 Keep Alive: Not Supported 00:32:36.753 00:32:36.753 NVM Command Set Attributes 00:32:36.753 ========================== 00:32:36.753 Submission Queue Entry Size 00:32:36.753 Max: 1 00:32:36.753 Min: 1 00:32:36.753 Completion Queue Entry Size 00:32:36.753 Max: 1 00:32:36.753 Min: 1 00:32:36.753 Number of Namespaces: 0 00:32:36.753 Compare Command: Not Supported 00:32:36.753 Write Uncorrectable Command: Not Supported 00:32:36.753 Dataset Management Command: Not Supported 00:32:36.753 Write Zeroes Command: Not Supported 00:32:36.753 Set Features Save Field: Not Supported 00:32:36.753 Reservations: Not Supported 00:32:36.753 Timestamp: Not Supported 00:32:36.753 Copy: Not Supported 00:32:36.753 Volatile Write Cache: Not Present 00:32:36.753 Atomic Write Unit (Normal): 1 00:32:36.753 Atomic Write Unit (PFail): 1 
00:32:36.753 Atomic Compare & Write Unit: 1 00:32:36.753 Fused Compare & Write: Not Supported 00:32:36.753 Scatter-Gather List 00:32:36.753 SGL Command Set: Supported 00:32:36.753 SGL Keyed: Not Supported 00:32:36.753 SGL Bit Bucket Descriptor: Not Supported 00:32:36.753 SGL Metadata Pointer: Not Supported 00:32:36.753 Oversized SGL: Not Supported 00:32:36.753 SGL Metadata Address: Not Supported 00:32:36.753 SGL Offset: Supported 00:32:36.753 Transport SGL Data Block: Not Supported 00:32:36.753 Replay Protected Memory Block: Not Supported 00:32:36.753 00:32:36.753 Firmware Slot Information 00:32:36.753 ========================= 00:32:36.753 Active slot: 0 00:32:36.753 00:32:36.753 00:32:36.753 Error Log 00:32:36.753 ========= 00:32:36.753 00:32:36.753 Active Namespaces 00:32:36.753 ================= 00:32:36.753 Discovery Log Page 00:32:36.753 ================== 00:32:36.753 Generation Counter: 2 00:32:36.753 Number of Records: 2 00:32:36.753 Record Format: 0 00:32:36.753 00:32:36.753 Discovery Log Entry 0 00:32:36.753 ---------------------- 00:32:36.753 Transport Type: 3 (TCP) 00:32:36.753 Address Family: 1 (IPv4) 00:32:36.753 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:36.753 Entry Flags: 00:32:36.753 Duplicate Returned Information: 0 00:32:36.753 Explicit Persistent Connection Support for Discovery: 0 00:32:36.753 Transport Requirements: 00:32:36.753 Secure Channel: Not Specified 00:32:36.753 Port ID: 1 (0x0001) 00:32:36.753 Controller ID: 65535 (0xffff) 00:32:36.753 Admin Max SQ Size: 32 00:32:36.753 Transport Service Identifier: 4420 00:32:36.753 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:36.753 Transport Address: 10.0.0.1 00:32:36.753 Discovery Log Entry 1 00:32:36.753 ---------------------- 00:32:36.753 Transport Type: 3 (TCP) 00:32:36.753 Address Family: 1 (IPv4) 00:32:36.753 Subsystem Type: 2 (NVM Subsystem) 00:32:36.753 Entry Flags: 00:32:36.753 Duplicate Returned Information: 0 00:32:36.753 Explicit Persistent 
Connection Support for Discovery: 0 00:32:36.753 Transport Requirements: 00:32:36.753 Secure Channel: Not Specified 00:32:36.753 Port ID: 1 (0x0001) 00:32:36.753 Controller ID: 65535 (0xffff) 00:32:36.753 Admin Max SQ Size: 32 00:32:36.753 Transport Service Identifier: 4420 00:32:36.753 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:36.753 Transport Address: 10.0.0.1 00:32:36.753 19:04:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:36.753 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.013 get_feature(0x01) failed 00:32:37.013 get_feature(0x02) failed 00:32:37.013 get_feature(0x04) failed 00:32:37.013 ===================================================== 00:32:37.013 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:37.013 ===================================================== 00:32:37.013 Controller Capabilities/Features 00:32:37.013 ================================ 00:32:37.013 Vendor ID: 0000 00:32:37.013 Subsystem Vendor ID: 0000 00:32:37.013 Serial Number: f933e3bc90d651a35362 00:32:37.013 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:37.013 Firmware Version: 6.7.0-68 00:32:37.013 Recommended Arb Burst: 6 00:32:37.013 IEEE OUI Identifier: 00 00 00 00:32:37.013 Multi-path I/O 00:32:37.013 May have multiple subsystem ports: Yes 00:32:37.013 May have multiple controllers: Yes 00:32:37.013 Associated with SR-IOV VF: No 00:32:37.013 Max Data Transfer Size: Unlimited 00:32:37.013 Max Number of Namespaces: 1024 00:32:37.013 Max Number of I/O Queues: 128 00:32:37.013 NVMe Specification Version (VS): 1.3 00:32:37.013 NVMe Specification Version (Identify): 1.3 00:32:37.013 Maximum Queue Entries: 1024 00:32:37.013 Contiguous Queues Required: No 00:32:37.013 Arbitration Mechanisms Supported 
00:32:37.013 Weighted Round Robin: Not Supported 00:32:37.013 Vendor Specific: Not Supported 00:32:37.013 Reset Timeout: 7500 ms 00:32:37.013 Doorbell Stride: 4 bytes 00:32:37.013 NVM Subsystem Reset: Not Supported 00:32:37.013 Command Sets Supported 00:32:37.013 NVM Command Set: Supported 00:32:37.013 Boot Partition: Not Supported 00:32:37.013 Memory Page Size Minimum: 4096 bytes 00:32:37.013 Memory Page Size Maximum: 4096 bytes 00:32:37.013 Persistent Memory Region: Not Supported 00:32:37.013 Optional Asynchronous Events Supported 00:32:37.013 Namespace Attribute Notices: Supported 00:32:37.013 Firmware Activation Notices: Not Supported 00:32:37.013 ANA Change Notices: Supported 00:32:37.013 PLE Aggregate Log Change Notices: Not Supported 00:32:37.013 LBA Status Info Alert Notices: Not Supported 00:32:37.013 EGE Aggregate Log Change Notices: Not Supported 00:32:37.013 Normal NVM Subsystem Shutdown event: Not Supported 00:32:37.013 Zone Descriptor Change Notices: Not Supported 00:32:37.013 Discovery Log Change Notices: Not Supported 00:32:37.013 Controller Attributes 00:32:37.013 128-bit Host Identifier: Supported 00:32:37.013 Non-Operational Permissive Mode: Not Supported 00:32:37.013 NVM Sets: Not Supported 00:32:37.013 Read Recovery Levels: Not Supported 00:32:37.013 Endurance Groups: Not Supported 00:32:37.013 Predictable Latency Mode: Not Supported 00:32:37.013 Traffic Based Keep ALive: Supported 00:32:37.013 Namespace Granularity: Not Supported 00:32:37.013 SQ Associations: Not Supported 00:32:37.013 UUID List: Not Supported 00:32:37.013 Multi-Domain Subsystem: Not Supported 00:32:37.013 Fixed Capacity Management: Not Supported 00:32:37.013 Variable Capacity Management: Not Supported 00:32:37.013 Delete Endurance Group: Not Supported 00:32:37.013 Delete NVM Set: Not Supported 00:32:37.013 Extended LBA Formats Supported: Not Supported 00:32:37.013 Flexible Data Placement Supported: Not Supported 00:32:37.013 00:32:37.013 Controller Memory Buffer Support 
00:32:37.013 ================================ 00:32:37.013 Supported: No 00:32:37.013 00:32:37.013 Persistent Memory Region Support 00:32:37.013 ================================ 00:32:37.013 Supported: No 00:32:37.013 00:32:37.013 Admin Command Set Attributes 00:32:37.013 ============================ 00:32:37.013 Security Send/Receive: Not Supported 00:32:37.013 Format NVM: Not Supported 00:32:37.013 Firmware Activate/Download: Not Supported 00:32:37.013 Namespace Management: Not Supported 00:32:37.013 Device Self-Test: Not Supported 00:32:37.013 Directives: Not Supported 00:32:37.013 NVMe-MI: Not Supported 00:32:37.013 Virtualization Management: Not Supported 00:32:37.013 Doorbell Buffer Config: Not Supported 00:32:37.013 Get LBA Status Capability: Not Supported 00:32:37.013 Command & Feature Lockdown Capability: Not Supported 00:32:37.013 Abort Command Limit: 4 00:32:37.013 Async Event Request Limit: 4 00:32:37.013 Number of Firmware Slots: N/A 00:32:37.013 Firmware Slot 1 Read-Only: N/A 00:32:37.013 Firmware Activation Without Reset: N/A 00:32:37.013 Multiple Update Detection Support: N/A 00:32:37.013 Firmware Update Granularity: No Information Provided 00:32:37.013 Per-Namespace SMART Log: Yes 00:32:37.013 Asymmetric Namespace Access Log Page: Supported 00:32:37.013 ANA Transition Time : 10 sec 00:32:37.013 00:32:37.013 Asymmetric Namespace Access Capabilities 00:32:37.013 ANA Optimized State : Supported 00:32:37.013 ANA Non-Optimized State : Supported 00:32:37.013 ANA Inaccessible State : Supported 00:32:37.013 ANA Persistent Loss State : Supported 00:32:37.013 ANA Change State : Supported 00:32:37.013 ANAGRPID is not changed : No 00:32:37.013 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:37.013 00:32:37.013 ANA Group Identifier Maximum : 128 00:32:37.013 Number of ANA Group Identifiers : 128 00:32:37.013 Max Number of Allowed Namespaces : 1024 00:32:37.013 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:37.013 Command Effects Log Page: Supported 
00:32:37.013 Get Log Page Extended Data: Supported
00:32:37.013 Telemetry Log Pages: Not Supported
00:32:37.013 Persistent Event Log Pages: Not Supported
00:32:37.013 Supported Log Pages Log Page: May Support
00:32:37.013 Commands Supported & Effects Log Page: Not Supported
00:32:37.013 Feature Identifiers & Effects Log Page: May Support
00:32:37.013 NVMe-MI Commands & Effects Log Page: May Support
00:32:37.013 Data Area 4 for Telemetry Log: Not Supported
00:32:37.013 Error Log Page Entries Supported: 128
00:32:37.013 Keep Alive: Supported
00:32:37.013 Keep Alive Granularity: 1000 ms
00:32:37.013 
00:32:37.013 NVM Command Set Attributes
00:32:37.013 ==========================
00:32:37.013 Submission Queue Entry Size
00:32:37.013 Max: 64
00:32:37.013 Min: 64
00:32:37.013 Completion Queue Entry Size
00:32:37.013 Max: 16
00:32:37.013 Min: 16
00:32:37.013 Number of Namespaces: 1024
00:32:37.013 Compare Command: Not Supported
00:32:37.013 Write Uncorrectable Command: Not Supported
00:32:37.013 Dataset Management Command: Supported
00:32:37.013 Write Zeroes Command: Supported
00:32:37.013 Set Features Save Field: Not Supported
00:32:37.013 Reservations: Not Supported
00:32:37.013 Timestamp: Not Supported
00:32:37.013 Copy: Not Supported
00:32:37.013 Volatile Write Cache: Present
00:32:37.013 Atomic Write Unit (Normal): 1
00:32:37.013 Atomic Write Unit (PFail): 1
00:32:37.013 Atomic Compare & Write Unit: 1
00:32:37.013 Fused Compare & Write: Not Supported
00:32:37.013 Scatter-Gather List
00:32:37.013 SGL Command Set: Supported
00:32:37.013 SGL Keyed: Not Supported
00:32:37.013 SGL Bit Bucket Descriptor: Not Supported
00:32:37.013 SGL Metadata Pointer: Not Supported
00:32:37.013 Oversized SGL: Not Supported
00:32:37.013 SGL Metadata Address: Not Supported
00:32:37.013 SGL Offset: Supported
00:32:37.013 Transport SGL Data Block: Not Supported
00:32:37.013 Replay Protected Memory Block: Not Supported
00:32:37.013 
00:32:37.013 Firmware Slot Information
00:32:37.013 =========================
00:32:37.013 Active slot: 0
00:32:37.013 
00:32:37.013 Asymmetric Namespace Access
00:32:37.013 ===========================
00:32:37.013 Change Count : 0
00:32:37.014 Number of ANA Group Descriptors : 1
00:32:37.014 ANA Group Descriptor : 0
00:32:37.014 ANA Group ID : 1
00:32:37.014 Number of NSID Values : 1
00:32:37.014 Change Count : 0
00:32:37.014 ANA State : 1
00:32:37.014 Namespace Identifier : 1
00:32:37.014 
00:32:37.014 Commands Supported and Effects
00:32:37.014 ==============================
00:32:37.014 Admin Commands
00:32:37.014 --------------
00:32:37.014 Get Log Page (02h): Supported
00:32:37.014 Identify (06h): Supported
00:32:37.014 Abort (08h): Supported
00:32:37.014 Set Features (09h): Supported
00:32:37.014 Get Features (0Ah): Supported
00:32:37.014 Asynchronous Event Request (0Ch): Supported
00:32:37.014 Keep Alive (18h): Supported
00:32:37.014 I/O Commands
00:32:37.014 ------------
00:32:37.014 Flush (00h): Supported
00:32:37.014 Write (01h): Supported LBA-Change
00:32:37.014 Read (02h): Supported
00:32:37.014 Write Zeroes (08h): Supported LBA-Change
00:32:37.014 Dataset Management (09h): Supported
00:32:37.014 
00:32:37.014 Error Log
00:32:37.014 =========
00:32:37.014 Entry: 0
00:32:37.014 Error Count: 0x3
00:32:37.014 Submission Queue Id: 0x0
00:32:37.014 Command Id: 0x5
00:32:37.014 Phase Bit: 0
00:32:37.014 Status Code: 0x2
00:32:37.014 Status Code Type: 0x0
00:32:37.014 Do Not Retry: 1
00:32:37.014 Error Location: 0x28
00:32:37.014 LBA: 0x0
00:32:37.014 Namespace: 0x0
00:32:37.014 Vendor Log Page: 0x0
00:32:37.014 -----------
00:32:37.014 Entry: 1
00:32:37.014 Error Count: 0x2
00:32:37.014 Submission Queue Id: 0x0
00:32:37.014 Command Id: 0x5
00:32:37.014 Phase Bit: 0
00:32:37.014 Status Code: 0x2
00:32:37.014 Status Code Type: 0x0
00:32:37.014 Do Not Retry: 1
00:32:37.014 Error Location: 0x28
00:32:37.014 LBA: 0x0
00:32:37.014 Namespace: 0x0
00:32:37.014 Vendor Log Page: 0x0
00:32:37.014 -----------
00:32:37.014 Entry: 2
00:32:37.014 Error Count: 0x1
00:32:37.014 Submission Queue Id: 0x0
00:32:37.014 Command Id: 0x4
00:32:37.014 Phase Bit: 0
00:32:37.014 Status Code: 0x2
00:32:37.014 Status Code Type: 0x0
00:32:37.014 Do Not Retry: 1
00:32:37.014 Error Location: 0x28
00:32:37.014 LBA: 0x0
00:32:37.014 Namespace: 0x0
00:32:37.014 Vendor Log Page: 0x0
00:32:37.014 
00:32:37.014 Number of Queues
00:32:37.014 ================
00:32:37.014 Number of I/O Submission Queues: 128
00:32:37.014 Number of I/O Completion Queues: 128
00:32:37.014 
00:32:37.014 ZNS Specific Controller Data
00:32:37.014 ============================
00:32:37.014 Zone Append Size Limit: 0
00:32:37.014 
00:32:37.014 
00:32:37.014 Active Namespaces
00:32:37.014 =================
00:32:37.014 get_feature(0x05) failed
00:32:37.014 Namespace ID:1
00:32:37.014 Command Set Identifier: NVM (00h)
00:32:37.014 Deallocate: Supported
00:32:37.014 Deallocated/Unwritten Error: Not Supported
00:32:37.014 Deallocated Read Value: Unknown
00:32:37.014 Deallocate in Write Zeroes: Not Supported
00:32:37.014 Deallocated Guard Field: 0xFFFF
00:32:37.014 Flush: Supported
00:32:37.014 Reservation: Not Supported
00:32:37.014 Namespace Sharing Capabilities: Multiple Controllers
00:32:37.014 Size (in LBAs): 1953525168 (931GiB)
00:32:37.014 Capacity (in LBAs): 1953525168 (931GiB)
00:32:37.014 Utilization (in LBAs): 1953525168 (931GiB)
00:32:37.014 UUID: ea0fa2e7-6211-4317-80eb-17cf97319035
00:32:37.014 Thin Provisioning: Not Supported
00:32:37.014 Per-NS Atomic Units: Yes
00:32:37.014 Atomic Boundary Size (Normal): 0
00:32:37.014 Atomic Boundary Size (PFail): 0
00:32:37.014 Atomic Boundary Offset: 0
00:32:37.014 NGUID/EUI64 Never Reused: No
00:32:37.014 ANA group ID: 1
00:32:37.014 Namespace Write Protected: No
00:32:37.014 Number of LBA Formats: 1
00:32:37.014 Current LBA Format: LBA Format #00
00:32:37.014 LBA Format #00: Data Size: 512 Metadata Size: 0
00:32:37.014 
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target --
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:32:37.014 rmmod nvme_tcp
00:32:37.014 rmmod nvme_fabrics
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:37.014 19:04:25 nvmf_tcp.nvmf_identify_kernel_target --
common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:38.912 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:38.912 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:32:38.912 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:32:38.912 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0
00:32:38.912 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:32:38.912 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:32:39.168 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:32:39.168 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:32:39.168 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:32:39.168 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:32:39.168 19:04:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:32:40.537 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:32:40.537 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:32:40.537 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:32:40.537 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:32:40.537 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:32:40.537 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:32:40.537 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:32:40.537 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:32:40.537 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:32:40.537 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:32:40.537 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:32:40.537 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:32:40.537 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:32:40.537 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:32:40.537 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:32:40.537 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:32:41.470 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:32:41.470 
00:32:41.470 real 0m9.535s
00:32:41.470 user 0m2.013s
00:32:41.470 sys 0m3.501s
00:32:41.470 19:04:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:41.470 19:04:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:32:41.470 ************************************
00:32:41.470 END TEST nvmf_identify_kernel_target
00:32:41.470 ************************************
00:32:41.470 19:04:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:32:41.470 19:04:29 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:32:41.470 19:04:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:32:41.470 19:04:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:41.470 19:04:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:41.470 ************************************
00:32:41.470 START TEST nvmf_auth_host
00:32:41.470 ************************************
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:32:41.470 * Looking for test storage...
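The Identify Controller dump printed by the test above is just a decoded view of the 4096-byte Identify payload the kernel target returned. As a rough illustration (field offsets follow my reading of the NVMe base specification; the buffer below is synthetic data mirroring the logged values, not a real controller dump), a few of the reported fields can be decoded like this:

```python
import struct

# Synthetic 4096-byte Identify Controller buffer carrying the values the
# log reports (illustrative only, not captured from the controller).
buf = bytearray(4096)
buf[320:322] = struct.pack("<H", 10)    # KAS: keep-alive granularity in 100 ms units
buf[512] = 0x66                         # SQES: max/min SQ entry size as powers of two (2^6 = 64 B)
buf[513] = 0x44                         # CQES: max/min CQ entry size as powers of two (2^4 = 16 B)
buf[516:520] = struct.pack("<I", 1024)  # NN: number of namespaces

kas, = struct.unpack_from("<H", buf, 320)
sqes, cqes = buf[512], buf[513]
nn, = struct.unpack_from("<I", buf, 516)

print(f"Keep Alive Granularity: {kas * 100} ms")
print(f"Submission Queue Entry Size Max: {1 << (sqes >> 4)} Min: {1 << (sqes & 0xF)}")
print(f"Completion Queue Entry Size Max: {1 << (cqes >> 4)} Min: {1 << (cqes & 0xF)}")
print(f"Number of Namespaces: {nn}")
```

Decoding these four fields reproduces the "Keep Alive Granularity: 1000 ms", 64/64-byte SQ entry, 16/16-byte CQ entry, and 1024-namespace lines seen in the dump above.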
00:32:41.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:41.470 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:41.471
19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:41.471 
19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host --
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable
00:32:41.471 19:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=()
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=()
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=()
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=()
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=()
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=()
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=()
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:32:43.367 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:32:43.367 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:32:43.367 Found net devices under 0000:0a:00.0: cvl_0_0
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:32:43.367 Found net devices under 0000:0a:00.1: cvl_0_1
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host --
nvmf/common.sh@234 -- # (( 2 > 1 ))
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:43.367 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:32:43.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:43.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms
00:32:43.625 
00:32:43.625 --- 10.0.0.2 ping statistics ---
00:32:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:43.625 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:43.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:43.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms
00:32:43.625 
00:32:43.625 --- 10.0.0.1 ping statistics ---
00:32:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:43.625 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.625 19:04:31
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3735478
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3735478
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3735478 ']'
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:43.625 19:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5485acf8b1430a10cc56066cc23935ed
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RiX
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5485acf8b1430a10cc56066cc23935ed 0
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5485acf8b1430a10cc56066cc23935ed 0
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5485acf8b1430a10cc56066cc23935ed
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:32:43.883 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RiX
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RiX
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.RiX
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f672d859580ae540f53af2c51d1346eac0c4795ac942412a9f311bd18f5c332
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.B7L
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f672d859580ae540f53af2c51d1346eac0c4795ac942412a9f311bd18f5c332 3
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f672d859580ae540f53af2c51d1346eac0c4795ac942412a9f311bd18f5c332 3
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f672d859580ae540f53af2c51d1346eac0c4795ac942412a9f311bd18f5c332
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.B7L
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.B7L
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.B7L
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ca15f0797efa3dfc277ac815e6de58da83088595d15d9613
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.97n
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ca15f0797efa3dfc277ac815e6de58da83088595d15d9613 0
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ca15f0797efa3dfc277ac815e6de58da83088595d15d9613 0
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ca15f0797efa3dfc277ac815e6de58da83088595d15d9613
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.97n
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.97n
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.97n
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b36cbe6c1acdcf422941f5a3ff58e33e43dcd739b64936c1
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.o8Y
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b36cbe6c1acdcf422941f5a3ff58e33e43dcd739b64936c1 2
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b36cbe6c1acdcf422941f5a3ff58e33e43dcd739b64936c1 2
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b36cbe6c1acdcf422941f5a3ff58e33e43dcd739b64936c1
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2
00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:32:44.141 19:04:32
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.o8Y 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.o8Y 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.o8Y 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b7143b0eb104fdefa2e02d7278576434 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.PDG 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b7143b0eb104fdefa2e02d7278576434 1 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b7143b0eb104fdefa2e02d7278576434 1 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b7143b0eb104fdefa2e02d7278576434 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:44.141 19:04:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.PDG 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.PDG 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.PDG 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a75e7450e96350acf5145d4723848d83 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GUF 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a75e7450e96350acf5145d4723848d83 1 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a75e7450e96350acf5145d4723848d83 1 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a75e7450e96350acf5145d4723848d83 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:44.400 
19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GUF 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GUF 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.GUF 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aa6e4682a501987c7d81790b2da09d7c262fe073c18cae77 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.WDa 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aa6e4682a501987c7d81790b2da09d7c262fe073c18cae77 2 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aa6e4682a501987c7d81790b2da09d7c262fe073c18cae77 2 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=aa6e4682a501987c7d81790b2da09d7c262fe073c18cae77 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.WDa 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.WDa 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.WDa 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a1fec168f7c6868a1ba86f66c72ac34b 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dah 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a1fec168f7c6868a1ba86f66c72ac34b 0 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a1fec168f7c6868a1ba86f66c72ac34b 0 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.400 19:04:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a1fec168f7c6868a1ba86f66c72ac34b 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dah 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dah 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.dah 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=31305dfb6362e38845409614800c7748b57c5afdcef9de245510a606c2b1bf3f 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.AhW 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 31305dfb6362e38845409614800c7748b57c5afdcef9de245510a606c2b1bf3f 3 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 31305dfb6362e38845409614800c7748b57c5afdcef9de245510a606c2b1bf3f 3 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=31305dfb6362e38845409614800c7748b57c5afdcef9de245510a606c2b1bf3f 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.AhW 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.AhW 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.AhW 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3735478 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3735478 ']' 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:44.400 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.967 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:44.967 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RiX 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.B7L ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.B7L 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.97n 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.o8Y ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o8Y 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PDG 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.GUF ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GUF 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.WDa 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.968 
19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.dah ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.dah 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.AhW 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
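The `get_main_ns_ip` helper traced above picks an environment variable name per transport (`NVMF_FIRST_TARGET_IP` for rdma, `NVMF_INITIATOR_IP` for tcp) and echoes its value, here 10.0.0.1. A rough Python equivalent of that selection logic (variable names are taken from the trace; the function itself is a sketch):

```python
def get_main_ns_ip(transport: str, env: dict) -> str:
    # Map each transport to the environment variable holding its IP,
    # mirroring the ip_candidates associative array in nvmf/common.sh.
    candidates = {"rdma": "NVMF_FIRST_TARGET_IP", "tcp": "NVMF_INITIATOR_IP"}
    if transport not in candidates:
        raise ValueError("unsupported transport: " + transport)
    ip = env.get(candidates[transport], "")
    if not ip:
        raise ValueError("no IP configured for transport " + transport)
    return ip

print(get_main_ns_ip("tcp", {"NVMF_INITIATOR_IP": "10.0.0.1"}))  # 10.0.0.1
```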
-e /sys/module/nvmet ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:44.968 19:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:45.902 Waiting for block devices as requested 00:32:45.902 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:45.902 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:46.160 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:46.160 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:46.160 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:46.417 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:46.417 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:46.417 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:46.675 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:46.675 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:46.675 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:46.675 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:46.934 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:46.934 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:46.934 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:46.934 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:47.192 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:47.451 No valid GPT data, bailing 00:32:47.451 19:04:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:47.710 00:32:47.710 Discovery Log Number of Records 2, Generation counter 2 00:32:47.710 =====Discovery Log Entry 0====== 00:32:47.710 trtype: tcp 00:32:47.710 adrfam: ipv4 00:32:47.710 subtype: current discovery subsystem 00:32:47.710 treq: not specified, sq flow control disable supported 00:32:47.710 portid: 1 00:32:47.710 trsvcid: 4420 00:32:47.710 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:47.710 traddr: 10.0.0.1 00:32:47.710 eflags: none 00:32:47.710 sectype: none 00:32:47.710 =====Discovery Log Entry 1====== 00:32:47.710 trtype: tcp 00:32:47.710 adrfam: ipv4 00:32:47.710 subtype: nvme subsystem 00:32:47.710 treq: not specified, sq flow control disable supported 00:32:47.710 portid: 1 00:32:47.710 trsvcid: 4420 00:32:47.710 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:47.710 traddr: 10.0.0.1 00:32:47.710 eflags: none 00:32:47.710 sectype: none 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.710 19:04:35 
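The `configure_kernel_target` and `nvmet_auth_init` steps above build the kernel target entirely out of configfs `mkdir`/`echo`/`ln -s` operations, and the discovery output confirms both the discovery subsystem and `nqn.2024-02.io.spdk:cnode0` listening on 10.0.0.1:4420. A small sketch that just reconstructs the configfs paths involved (the helper is hypothetical; the paths are the ones visible in the trace):

```python
def nvmet_configfs_paths(subnqn: str, hostnqn: str, nsid: int = 1, port: int = 1) -> dict:
    # Lay out the configfs nodes used by the trace: subsystem, namespace,
    # port, the port->subsystem link, and the allowed-host entry.
    nvmet = "/sys/kernel/config/nvmet"
    subsys = "{}/subsystems/{}".format(nvmet, subnqn)
    return {
        "subsystem": subsys,
        "namespace": "{}/namespaces/{}".format(subsys, nsid),
        "port": "{}/ports/{}".format(nvmet, port),
        "port_link": "{}/ports/{}/subsystems/{}".format(nvmet, port, subnqn),
        "host": "{}/hosts/{}".format(nvmet, hostnqn),
        "allowed_host": "{}/allowed_hosts/{}".format(subsys, hostnqn),
    }

paths = nvmet_configfs_paths("nqn.2024-02.io.spdk:cnode0", "nqn.2024-02.io.spdk:host0")
```

Creating the subsystem and host directories, then symlinking the subsystem under the port and the host under `allowed_hosts`, is what makes the subsystem visible (and restricted) to that host in the discovery log.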
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.710 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.970 nvme0n1 00:32:47.970 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.970 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.970 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.970 19:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.970 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.970 19:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.970 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.229 nvme0n1 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.229 19:04:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.229 nvme0n1 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.229 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.488 19:04:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.488 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.489 nvme0n1 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.489 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.747 nvme0n1 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.747 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.748 19:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.007 nvme0n1 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.007 19:04:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.007 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.266 nvme0n1
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==:
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==:
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==:
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]]
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==:
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.266 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.524 nvme0n1
00:32:49.524 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.524 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P:
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH:
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P:
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]]
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH:
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.525 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.784 nvme0n1
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==:
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3:
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==:
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]]
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3:
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:49.784 19:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.042 nvme0n1
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=:
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=:
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.042 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.300 nvme0n1
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE:
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=:
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE:
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]]
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=:
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.300 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.558 nvme0n1
00:32:50.558 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.558 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:50.558 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.558 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.558 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==:
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==:
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==:
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]]
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==:
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.816 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:50.817 19:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:51.075 nvme0n1
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P:
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH:
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P:
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]]
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH:
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:51.075 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:51.335 nvme0n1
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==:
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3:
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==:
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]]
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3:
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:32:51.336 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:51.621 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:51.621 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:51.879 nvme0n1
00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z '' ]] 00:32:51.879 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ 
-z 10.0.0.1 ]] 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.880 19:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.137 nvme0n1 00:32:52.137 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.138 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.703 nvme0n1 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.703 19:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.267 nvme0n1 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.267 19:04:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.268 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.525 19:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.092 nvme0n1 
00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.092 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.657 nvme0n1 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.657 19:04:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.657 19:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.220 nvme0n1 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 
00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.220 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.221 19:04:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.221 19:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.151 nvme0n1 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.151 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.408 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.409 19:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.340 nvme0n1 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.340 19:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.273 nvme0n1 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.273 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:32:58.274 19:04:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.274 19:04:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.274 19:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.206 nvme0n1 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.206 19:04:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.206 19:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.578 nvme0n1 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:00.578 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.579 
19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.579 nvme0n1 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.579 
19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.579 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.837 nvme0n1 00:33:00.837 19:04:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.837 19:04:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.837 19:04:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.837 19:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.837 nvme0n1 00:33:00.837 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.837 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.838 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.838 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.838 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.838 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.095 
19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.095 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.096 19:04:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.096 nvme0n1 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.096 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.353 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.354 19:04:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.354 nvme0n1 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.354 19:04:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.354 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.625 nvme0n1 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:01.625 19:04:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.625 19:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.626 19:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.626 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.626 19:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.884 nvme0n1 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.884 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.142 nvme0n1 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.142 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.143 19:04:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.143 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.401 nvme0n1 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.401 19:04:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.401 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.660 nvme0n1 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.660 
19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.660 
19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.660 19:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.226 nvme0n1 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.226 19:04:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.226 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.484 nvme0n1 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.484 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.742 nvme0n1 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.742 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.743 19:04:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.743 19:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.308 nvme0n1 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.308 19:04:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:04.308 19:04:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.308 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.566 nvme0n1 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.566 19:04:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.566 19:04:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.566 19:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.131 nvme0n1 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.131 
19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.131 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.696 nvme0n1 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.696 19:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.260 nvme0n1 00:33:06.260 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.260 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.260 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.260 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.260 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.260 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.518 19:04:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.518 19:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.080 nvme0n1 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.080 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.644 nvme0n1 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:07.644 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:07.645 19:04:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.645 19:04:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.603 nvme0n1 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.603 19:04:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.604 19:04:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.978 nvme0n1 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.978 19:04:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.913 nvme0n1 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.913 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.914 19:04:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.914 19:04:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.914 19:04:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.849 nvme0n1 00:33:11.849 19:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.849 19:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.849 19:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.849 19:04:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.849 19:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.849 19:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.849 19:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.849 19:04:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.849 19:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.849 19:04:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:11.849 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.850 
19:05:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.850 19:05:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.782 nvme0n1 00:33:12.782 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.782 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.782 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.782 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.782 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.039 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 
00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- 
# rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.040 nvme0n1 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.040 19:05:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.040 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.298 nvme0n1 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.298 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:33:13.556 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.556 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.556 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:13.556 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:13.557 
19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:13.557 nvme0n1 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:13.557 
19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.557 19:05:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.557 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.815 nvme0n1 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.815 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.816 19:05:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.816 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.074 nvme0n1 00:33:14.074 19:05:02 
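The iteration that just completed shows the host-side half of each authentication cycle in `host/auth.sh` (`connect_authenticate`): restrict the SPDK host to one digest/DH-group pair, attach with the matching DH-CHAP key, verify the controller name, then detach. A minimal sketch of that RPC sequence, assuming a running SPDK target and `rpc.py`; the commands are only composed here, not sent:

```shell
#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (digest=sha512,
# dhgroup=ffdhe2048, keyid=3, as in the log above). rpc.py and a live
# target are assumed but not invoked -- we only build the argument lists.

digest=sha512
dhgroup=ffdhe2048
keyid=3

# host/auth.sh@60: limit negotiation to this digest and DH group
set_opts="bdev_nvme_set_options --dhchap-digests ${digest} --dhchap-dhgroups ${dhgroup}"

# host/auth.sh@61: attach over TCP with the DH-CHAP key; the controller
# key (ckeyN) is appended only when bidirectional auth is configured
attach="bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420"
attach+=" -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0"
attach+=" --dhchap-key key${keyid} --dhchap-ctrlr-key ckey${keyid}"

echo "$set_opts"
echo "$attach"
# The real test then runs bdev_nvme_get_controllers, checks the name
# against nvme0, and cleans up with bdev_nvme_detach_controller nvme0.
```

In the real run each of these is passed to `rpc_cmd`, and the `[[ nvme0 == \n\v\m\e\0 ]]` checks in the log are the name verification after `bdev_nvme_get_controllers | jq -r '.[].name'`.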
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.074 19:05:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.074 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.333 nvme0n1 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.333 
19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.333 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.591 nvme0n1 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:14.591 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.592 19:05:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.592 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.850 nvme0n1 00:33:14.850 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.850 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.850 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.850 19:05:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.850 19:05:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.850 19:05:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.850 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.109 nvme0n1 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.109 19:05:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:15.109 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.110 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.366 nvme0n1 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:33:15.367 19:05:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.367 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.623 nvme0n1 00:33:15.623 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.623 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.623 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.623 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.623 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.623 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:15.882 19:05:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.882 19:05:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.142 nvme0n1 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P: 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]] 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.142 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.401 nvme0n1 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.401 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==: 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]] 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.402 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.661 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.920 nvme0n1 00:33:16.920 19:05:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.920 19:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.920 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.178 nvme0n1 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:17.178 
19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE: 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]] 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:17.178 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.178 19:05:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.743 nvme0n1
00:33:17.743 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:17.743 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:17.743 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:17.743 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:17.743 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:17.743 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:18.002 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:18.002 19:05:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:18.002 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:18.002 19:05:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==:
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==:
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==:
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]]
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==:
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:18.002 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:18.571 nvme0n1
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P:
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH:
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P:
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]]
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH:
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:18.571 19:05:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.139 nvme0n1
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==:
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3:
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==:
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]]
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3:
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
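The `host/auth.sh@58` trace above shows the idiom `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`. A minimal sketch of how that bash expansion behaves, with illustrative stand-in values rather than the test's real key data:

```shell
# ${arr[i]:+words} expands to "words" only when arr[i] is set and
# non-empty, so the --dhchap-ctrlr-key pair is silently omitted for a
# key slot that has no controller key (as seen for key slot 4 in this
# log). The values below are stand-ins, not the test's real keys.
ckeys=("ctrl-secret" "")   # slot 0 has a ctrlr key, slot 1 does not

for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "slot $keyid adds ${#ckey[@]} extra args"
done
```

Because the array assignment is unquoted, the alternative value splits into exactly the two words `--dhchap-ctrlr-key` and `ckeyN`, which are then spliced into the `bdev_nvme_attach_controller` invocation.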
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.139 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.708 nvme0n1
19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
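The `get_main_ns_ip` body traced above resolves the connect address in two steps: an associative array maps the transport type to the *name* of the environment variable that holds the address, and bash indirect expansion (`${!ip}`) then resolves that name to its value. A hedged sketch with stand-in values for this run's environment:

```shell
# Sketch of the get_main_ns_ip pattern from nvmf/common.sh. The
# variable values are stand-ins mirroring this log (tcp transport,
# initiator at 10.0.0.1); only the lookup mechanism is the point here.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

declare -A ip_candidates
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
ip_candidates["tcp"]=NVMF_INITIATOR_IP

ip=${ip_candidates[$TEST_TRANSPORT]}   # the name "NVMF_INITIATOR_IP"
echo "${!ip}"                          # indirect expansion -> 10.0.0.1
```

This is why the trace tests `[[ -z NVMF_INITIATOR_IP ]]` (the name) before `[[ -z 10.0.0.1 ]]` (the resolved value) and finally echoes `10.0.0.1`.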
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=:
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=:
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:19.708 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:19.709 19:05:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.276 nvme0n1
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE:
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=:
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQ4NWFjZjhiMTQzMGExMGNjNTYwNjZjYzIzOTM1ZWRozFKE:
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=: ]]
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGY2NzJkODU5NTgwYWU1NDBmNTNhZjJjNTFkMTM0NmVhYzBjNDc5NWFjOTQyNDEyYTlmMzExYmQxOGY1YzMzMiHUssk=:
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:20.276 19:05:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.214 nvme0n1
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==:
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==:
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==:
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]]
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==:
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.214 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:21.472 19:05:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.405 nvme0n1
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P:
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH:
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjcxNDNiMGViMTA0ZmRlZmEyZTAyZDcyNzg1NzY0MzRkz+0P:
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH: ]]
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTc1ZTc0NTBlOTYzNTBhY2Y1MTQ1ZDQ3MjM4NDhkODMoOSOH:
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:22.405 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:22.406 19:05:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:23.340 nvme0n1
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==:
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3:
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWE2ZTQ2ODJhNTAxOTg3YzdkODE3OTBiMmRhMDlkN2MyNjJmZTA3M2MxOGNhZTc37oF8gw==:
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3: ]]
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTFmZWMxNjhmN2M2ODY4YTFiYTg2ZjY2YzcyYWMzNGK3zAP3:
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:23.340 19:05:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:24.275 nvme0n1
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:24.275 19:05:12
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzEzMDVkZmI2MzYyZTM4ODQ1NDA5NjE0ODAwYzc3NDhiNTdjNWFmZGNlZjlkZTI0NTUxMGE2MDZjMmIxYmYzZuOqwmw=: 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:24.275 19:05:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.275 19:05:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.241 nvme0n1 00:33:25.241 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.241 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.241 19:05:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.241 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.241 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:25.241 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2ExNWYwNzk3ZWZhM2RmYzI3N2FjODE1ZTZkZTU4ZGE4MzA4ODU5NWQxNWQ5NjEzuCbDFA==: 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2Y2JlNmMxYWNkY2Y0MjI5NDFmNWEzZmY1OGUzM2U0M2RjZDczOWI2NDkzNmMxgFqhJQ==: 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.500 request: 00:33:25.500 { 00:33:25.500 "name": "nvme0", 00:33:25.500 "trtype": "tcp", 00:33:25.500 "traddr": "10.0.0.1", 00:33:25.500 "adrfam": "ipv4", 00:33:25.500 "trsvcid": "4420", 00:33:25.500 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:25.500 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:25.500 "prchk_reftag": false, 00:33:25.500 "prchk_guard": false, 00:33:25.500 "hdgst": false, 00:33:25.500 "ddgst": false, 00:33:25.500 "method": "bdev_nvme_attach_controller", 00:33:25.500 "req_id": 1 00:33:25.500 } 00:33:25.500 Got JSON-RPC error response 00:33:25.500 response: 00:33:25.500 { 00:33:25.500 "code": -5, 00:33:25.500 "message": "Input/output error" 00:33:25.500 } 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.500 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.759 request: 00:33:25.759 { 00:33:25.759 "name": "nvme0", 00:33:25.759 "trtype": "tcp", 00:33:25.759 "traddr": "10.0.0.1", 00:33:25.759 "adrfam": "ipv4", 00:33:25.759 "trsvcid": "4420", 00:33:25.759 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:25.759 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:25.759 "prchk_reftag": false, 00:33:25.760 "prchk_guard": false, 00:33:25.760 "hdgst": false, 00:33:25.760 "ddgst": false, 00:33:25.760 "dhchap_key": "key2", 00:33:25.760 "method": "bdev_nvme_attach_controller", 00:33:25.760 "req_id": 1 00:33:25.760 } 00:33:25.760 Got JSON-RPC error response 00:33:25.760 response: 00:33:25.760 { 
00:33:25.760 "code": -5, 00:33:25.760 "message": "Input/output error" 00:33:25.760 } 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:25.760 
19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.760 request: 00:33:25.760 { 00:33:25.760 "name": "nvme0", 00:33:25.760 "trtype": "tcp", 00:33:25.760 "traddr": "10.0.0.1", 00:33:25.760 "adrfam": "ipv4", 00:33:25.760 "trsvcid": "4420", 00:33:25.760 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:25.760 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:25.760 
"prchk_reftag": false, 00:33:25.760 "prchk_guard": false, 00:33:25.760 "hdgst": false, 00:33:25.760 "ddgst": false, 00:33:25.760 "dhchap_key": "key1", 00:33:25.760 "dhchap_ctrlr_key": "ckey2", 00:33:25.760 "method": "bdev_nvme_attach_controller", 00:33:25.760 "req_id": 1 00:33:25.760 } 00:33:25.760 Got JSON-RPC error response 00:33:25.760 response: 00:33:25.760 { 00:33:25.760 "code": -5, 00:33:25.760 "message": "Input/output error" 00:33:25.760 } 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:25.760 rmmod nvme_tcp 00:33:25.760 rmmod nvme_fabrics 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3735478 ']' 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3735478 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3735478 ']' 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3735478 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3735478 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3735478' 00:33:25.760 killing process with pid 3735478 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3735478 00:33:25.760 19:05:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3735478 00:33:26.019 19:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:26.019 19:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:26.019 19:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:26.019 19:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:26.019 19:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:26.019 19:05:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.019 19:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:33:26.019 19:05:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:28.556 19:05:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:29.493 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:29.493 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:29.493 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:29.493 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 
00:33:29.493 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:29.493 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:29.493 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:29.493 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:29.493 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:29.493 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:29.493 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:29.493 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:29.493 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:29.493 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:29.494 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:29.494 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:30.433 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:30.692 19:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.RiX /tmp/spdk.key-null.97n /tmp/spdk.key-sha256.PDG /tmp/spdk.key-sha384.WDa /tmp/spdk.key-sha512.AhW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:30.692 19:05:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:31.652 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:31.652 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:31.652 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:31.652 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:31.652 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:31.652 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:31.652 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:31.652 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:31.652 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:31.652 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:31.652 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:31.652 
0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:31.652 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:31.652 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:31.652 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:31.652 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:31.652 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:31.911 00:33:31.911 real 0m50.401s 00:33:31.911 user 0m47.961s 00:33:31.911 sys 0m5.882s 00:33:31.911 19:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:31.911 19:05:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.911 ************************************ 00:33:31.911 END TEST nvmf_auth_host 00:33:31.911 ************************************ 00:33:31.911 19:05:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:31.911 19:05:20 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:31.911 19:05:20 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:31.911 19:05:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:31.911 19:05:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.911 19:05:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:31.911 ************************************ 00:33:31.911 START TEST nvmf_digest 00:33:31.911 ************************************ 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:31.911 * Looking for test storage... 
00:33:31.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.911 19:05:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.170 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:32.170 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:32.170 19:05:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:32.170 19:05:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:34.073 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:34.074 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:34.074 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:34.074 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:34.074 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:34.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:34.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:33:34.074 00:33:34.074 --- 10.0.0.2 ping statistics --- 00:33:34.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.074 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:34.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:34.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:33:34.074 00:33:34.074 --- 10.0.0.1 ping statistics --- 00:33:34.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.074 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:34.074 19:05:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:34.335 ************************************ 00:33:34.335 START TEST nvmf_digest_clean 00:33:34.335 ************************************ 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:34.335 19:05:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3745003 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3745003 00:33:34.335 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3745003 ']' 00:33:34.336 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:34.336 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:34.336 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:34.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:34.336 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:34.336 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.336 [2024-07-14 19:05:22.375871] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:34.336 [2024-07-14 19:05:22.375975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.336 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.336 [2024-07-14 19:05:22.444298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.336 [2024-07-14 19:05:22.542509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.336 [2024-07-14 19:05:22.542563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:34.336 [2024-07-14 19:05:22.542577] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.336 [2024-07-14 19:05:22.542588] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.336 [2024-07-14 19:05:22.542597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:34.336 [2024-07-14 19:05:22.542624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.594 null0 00:33:34.594 [2024-07-14 19:05:22.726821] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.594 [2024-07-14 19:05:22.751065] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:34.594 
19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:34.594 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3745026 00:33:34.595 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3745026 /var/tmp/bperf.sock 00:33:34.595 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3745026 ']' 00:33:34.595 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:34.595 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:34.595 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:34.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:34.595 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:34.595 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:34.595 19:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.595 [2024-07-14 19:05:22.801244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:33:34.595 [2024-07-14 19:05:22.801331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745026 ] 00:33:34.853 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.853 [2024-07-14 19:05:22.861794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.853 [2024-07-14 19:05:22.951745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.853 19:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:34.853 19:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:34.853 19:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:34.853 19:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:34.853 19:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:35.421 19:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.421 19:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.681 nvme0n1 00:33:35.681 19:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:35.681 19:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:33:35.681 Running I/O for 2 seconds... 00:33:38.218 00:33:38.218 Latency(us) 00:33:38.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.218 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:38.218 nvme0n1 : 2.01 18622.31 72.74 0.00 0.00 6863.50 3592.34 15146.10 00:33:38.218 =================================================================================================================== 00:33:38.218 Total : 18622.31 72.74 0.00 0.00 6863.50 3592.34 15146.10 00:33:38.218 0 00:33:38.218 19:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:38.218 19:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:38.218 19:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:38.218 19:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:38.218 19:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:38.218 | select(.opcode=="crc32c") 00:33:38.218 | "\(.module_name) \(.executed)"' 00:33:38.218 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:38.218 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:38.218 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:38.218 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:38.218 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3745026 00:33:38.218 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3745026 ']' 00:33:38.218 19:05:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3745026 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745026 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745026' 00:33:38.219 killing process with pid 3745026 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3745026 00:33:38.219 Received shutdown signal, test time was about 2.000000 seconds 00:33:38.219 00:33:38.219 Latency(us) 00:33:38.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.219 =================================================================================================================== 00:33:38.219 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3745026 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:38.219 19:05:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3745433 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3745433 /var/tmp/bperf.sock 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3745433 ']' 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:38.219 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:38.219 [2024-07-14 19:05:26.404973] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:33:38.219 [2024-07-14 19:05:26.405046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745433 ] 00:33:38.219 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:38.219 Zero copy mechanism will not be used. 00:33:38.219 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.477 [2024-07-14 19:05:26.467629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.477 [2024-07-14 19:05:26.560378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.477 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:38.477 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:38.477 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:38.477 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:38.477 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:39.044 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.044 19:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.303 nvme0n1 00:33:39.303 19:05:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:39.303 19:05:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:39.303 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:39.303 Zero copy mechanism will not be used. 00:33:39.303 Running I/O for 2 seconds... 00:33:41.839 00:33:41.839 Latency(us) 00:33:41.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.839 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:41.839 nvme0n1 : 2.00 5301.98 662.75 0.00 0.00 3013.52 776.72 5000.15 00:33:41.839 =================================================================================================================== 00:33:41.839 Total : 5301.98 662.75 0.00 0.00 3013.52 776.72 5000.15 00:33:41.839 0 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:41.839 | select(.opcode=="crc32c") 00:33:41.839 | "\(.module_name) \(.executed)"' 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3745433 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3745433 ']' 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3745433 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745433 00:33:41.839 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:41.840 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:41.840 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745433' 00:33:41.840 killing process with pid 3745433 00:33:41.840 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3745433 00:33:41.840 Received shutdown signal, test time was about 2.000000 seconds 00:33:41.840 00:33:41.840 Latency(us) 00:33:41.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.840 =================================================================================================================== 00:33:41.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.840 19:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3745433 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 
00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3745953 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3745953 /var/tmp/bperf.sock 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3745953 ']' 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:41.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:41.840 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:41.840 [2024-07-14 19:05:30.053323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:33:41.840 [2024-07-14 19:05:30.053404] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745953 ] 00:33:42.098 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.098 [2024-07-14 19:05:30.114341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.098 [2024-07-14 19:05:30.202628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.098 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:42.098 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:42.098 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:42.098 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:42.098 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:42.666 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.667 19:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.925 nvme0n1 00:33:42.925 19:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:42.925 19:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:33:43.182 Running I/O for 2 seconds... 00:33:45.129 00:33:45.129 Latency(us) 00:33:45.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.129 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:45.129 nvme0n1 : 2.00 20508.55 80.11 0.00 0.00 6231.35 3203.98 16214.09 00:33:45.129 =================================================================================================================== 00:33:45.129 Total : 20508.55 80.11 0.00 0.00 6231.35 3203.98 16214.09 00:33:45.129 0 00:33:45.129 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:45.129 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:45.129 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:45.129 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:45.129 | select(.opcode=="crc32c") 00:33:45.129 | "\(.module_name) \(.executed)"' 00:33:45.129 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3745953 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3745953 ']' 00:33:45.389 19:05:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3745953 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745953 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745953' 00:33:45.389 killing process with pid 3745953 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3745953 00:33:45.389 Received shutdown signal, test time was about 2.000000 seconds 00:33:45.389 00:33:45.389 Latency(us) 00:33:45.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.389 =================================================================================================================== 00:33:45.389 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.389 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3745953 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:45.646 19:05:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3746369 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3746369 /var/tmp/bperf.sock 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3746369 ']' 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:45.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:45.646 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:45.647 [2024-07-14 19:05:33.783713] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:33:45.647 [2024-07-14 19:05:33.783803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746369 ] 00:33:45.647 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.647 Zero copy mechanism will not be used. 00:33:45.647 EAL: No free 2048 kB hugepages reported on node 1 00:33:45.647 [2024-07-14 19:05:33.843155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.904 [2024-07-14 19:05:33.932624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.904 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:45.904 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:45.904 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:45.904 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:45.904 19:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:46.160 19:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.160 19:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:46.728 nvme0n1 00:33:46.728 19:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:46.728 19:05:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:46.728 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:46.728 Zero copy mechanism will not be used. 00:33:46.728 Running I/O for 2 seconds... 00:33:49.265 00:33:49.265 Latency(us) 00:33:49.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.265 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:49.265 nvme0n1 : 2.00 5456.24 682.03 0.00 0.00 2924.88 2087.44 5752.60 00:33:49.265 =================================================================================================================== 00:33:49.265 Total : 5456.24 682.03 0.00 0.00 2924.88 2087.44 5752.60 00:33:49.265 0 00:33:49.265 19:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:49.265 19:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:49.265 19:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:49.265 19:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:49.265 | select(.opcode=="crc32c") 00:33:49.265 | "\(.module_name) \(.executed)"' 00:33:49.265 19:05:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3746369 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3746369 ']' 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3746369 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746369 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746369' 00:33:49.265 killing process with pid 3746369 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3746369 00:33:49.265 Received shutdown signal, test time was about 2.000000 seconds 00:33:49.265 00:33:49.265 Latency(us) 00:33:49.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.265 =================================================================================================================== 00:33:49.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3746369 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3745003 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3745003 ']' 00:33:49.265 
19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3745003 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745003 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745003' 00:33:49.265 killing process with pid 3745003 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3745003 00:33:49.265 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3745003 00:33:49.524 00:33:49.524 real 0m15.351s 00:33:49.524 user 0m30.612s 00:33:49.524 sys 0m4.150s 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:49.524 ************************************ 00:33:49.524 END TEST nvmf_digest_clean 00:33:49.524 ************************************ 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:49.524 19:05:37 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:49.524 ************************************ 00:33:49.524 START TEST nvmf_digest_error 00:33:49.524 ************************************ 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3746804 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3746804 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3746804 ']' 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:49.524 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.783 [2024-07-14 19:05:37.778707] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:49.783 [2024-07-14 19:05:37.778779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.783 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.783 [2024-07-14 19:05:37.842686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.783 [2024-07-14 19:05:37.933603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.783 [2024-07-14 19:05:37.933667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:49.783 [2024-07-14 19:05:37.933681] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.783 [2024-07-14 19:05:37.933691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.783 [2024-07-14 19:05:37.933700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:49.783 [2024-07-14 19:05:37.933739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.783 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:49.783 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:49.783 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:49.783 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:49.783 19:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:50.042 [2024-07-14 19:05:38.018365] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:50.042 null0 00:33:50.042 [2024-07-14 19:05:38.133490] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.042 
[2024-07-14 19:05:38.157693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3746942 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3746942 /var/tmp/bperf.sock 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3746942 ']' 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:50.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:50.042 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:50.042 [2024-07-14 19:05:38.205780] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:50.042 [2024-07-14 19:05:38.205873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746942 ] 00:33:50.042 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.042 [2024-07-14 19:05:38.267002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.300 [2024-07-14 19:05:38.357731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.300 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:50.300 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:50.300 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:50.300 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:50.558 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:50.558 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.558 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:50.558 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:33:50.558 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:50.558 19:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:51.125 nvme0n1
00:33:51.125 19:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:51.125 19:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:51.125 19:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:51.125 19:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:51.125 19:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:51.125 19:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:51.125 Running I/O for 2 seconds...
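The controller was attached with `--ddgst`, so data digests are enabled on the NVMe/TCP connection, and `accel_error_inject_error -o crc32c -t corrupt -i 256` injects corrupted crc32c results into the accel layer; the "data digest error" records that follow are the host's receive path computing a CRC-32C digest that no longer matches the one carried in the PDU, so each command completes with a transient transport error (00/22). NVMe/TCP digests use CRC-32C (the Castagnoli polynomial). A bit-by-bit reference implementation for illustration only (SPDK itself uses table-driven or hardware-accelerated crc32c):

```python
CRC32C_POLY = 0x82F63B78  # reflected Castagnoli polynomial

def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C, the checksum used for the NVMe/TCP data digest (DDGST)."""
    crc ^= 0xFFFFFFFF  # standard init value
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (CRC32C_POLY if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF  # standard final XOR
```

The well-known CRC-32C check value holds here: `crc32c(b"123456789") == 0xE3069283`. A receiver recomputes this over the PDU's data and compares it with the received DDGST; any mismatch surfaces exactly as the errors logged below.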
00:33:51.385 [2024-07-14 19:05:39.374818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.385 [2024-07-14 19:05:39.374866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.385 [2024-07-14 19:05:39.374894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.385 [2024-07-14 19:05:39.390376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.385 [2024-07-14 19:05:39.390410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.385 [2024-07-14 19:05:39.390428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.385 [2024-07-14 19:05:39.401322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.385 [2024-07-14 19:05:39.401357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.385 [2024-07-14 19:05:39.401376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.385 [2024-07-14 19:05:39.416336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.385 [2024-07-14 19:05:39.416380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.385 [2024-07-14 19:05:39.416398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.385 [2024-07-14 19:05:39.433466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.385 [2024-07-14 19:05:39.433505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.385 [2024-07-14 19:05:39.433522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.385 [2024-07-14 19:05:39.445382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.385 [2024-07-14 19:05:39.445413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.385 [2024-07-14 19:05:39.445430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.385 [2024-07-14 19:05:39.462662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.385 [2024-07-14 19:05:39.462704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.385 [2024-07-14 19:05:39.462721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.385 [2024-07-14 19:05:39.474481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.385 [2024-07-14 19:05:39.474512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.385 [2024-07-14 19:05:39.474544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.385 [2024-07-14 19:05:39.489507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.385 [2024-07-14 19:05:39.489543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.385 [2024-07-14 19:05:39.489562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.385 [2024-07-14 19:05:39.505005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.386 [2024-07-14 19:05:39.505037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.386 [2024-07-14 19:05:39.505055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.386 [2024-07-14 19:05:39.519438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.386 [2024-07-14 19:05:39.519469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.386 [2024-07-14 19:05:39.519485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.386 [2024-07-14 19:05:39.531110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.386 [2024-07-14 19:05:39.531139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:51.386 [2024-07-14 19:05:39.531171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.386 [2024-07-14 19:05:39.543869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.386 [2024-07-14 19:05:39.543928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.386 [2024-07-14 19:05:39.543944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.386 [2024-07-14 19:05:39.558680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.386 [2024-07-14 19:05:39.558708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.386 [2024-07-14 19:05:39.558738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.386 [2024-07-14 19:05:39.571718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.386 [2024-07-14 19:05:39.571753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.386 [2024-07-14 19:05:39.571784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.386 [2024-07-14 19:05:39.585295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.386 [2024-07-14 19:05:39.585340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:106 nsid:1 lba:16280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.386 [2024-07-14 19:05:39.585357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.386 [2024-07-14 19:05:39.595638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.386 [2024-07-14 19:05:39.595664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.386 [2024-07-14 19:05:39.595695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.611787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.611817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.611848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.624813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.624844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.624861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.640059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.640091] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.640108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.651847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.651897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.651916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.667391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.667419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.667449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.682543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.682573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.682590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.697019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.697050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.697066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.708095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.708139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.708154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.722652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.722697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.722715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.735082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.735111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.735144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.749220] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.749263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.749279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.761883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.761910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.761941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.774207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.774235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.774265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.786671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.786699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.786730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.799030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.799059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.799099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.811587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.811629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.811646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.823010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.823039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.823070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.836550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.836580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.836597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.850166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.850196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.850212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.646 [2024-07-14 19:05:39.860747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.646 [2024-07-14 19:05:39.860774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.646 [2024-07-14 19:05:39.860804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.905 [2024-07-14 19:05:39.875659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.905 [2024-07-14 19:05:39.875690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.905 [2024-07-14 19:05:39.875707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.905 [2024-07-14 19:05:39.891837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.905 [2024-07-14 19:05:39.891866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.905 [2024-07-14 
19:05:39.891907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.905 [2024-07-14 19:05:39.903069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.905 [2024-07-14 19:05:39.903098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:39.903130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:39.918035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:39.918072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:39.918090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:39.929862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:39.929901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:39.929919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:39.943341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:39.943368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12256 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:39.943399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:39.954700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:39.954729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:39.954760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:39.967313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:39.967343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:39.967360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:39.980518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:39.980563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:39.980580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:39.993328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:39.993358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:39.993374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.007095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.007158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.007184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.020276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.020314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.020332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.033828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.033858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.033901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.046473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.046504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.046534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.061094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.061126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.061144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.072679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.072708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.072724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.088285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.088314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.088347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.099067] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.099097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.099113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.114476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.114505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.114536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.906 [2024-07-14 19:05:40.130672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:51.906 [2024-07-14 19:05:40.130703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.906 [2024-07-14 19:05:40.130720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.144999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.145038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.145056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.156230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.156261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.156277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.171103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.171134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.171151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.185123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.185153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.185170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.197085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.197115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.197132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.211319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.211363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.211380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.222543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.222571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.222603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.235488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.235516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.235546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.247953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.247981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.248012] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.261316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.261359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.261376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.272903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.272931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.272968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.285477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.285504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.285535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.299565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.299595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16436 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.299611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.312377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.312404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.312434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.325047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.325076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.325093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.336266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.336295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.336312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.349106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.349135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:6697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.349152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.361648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.361678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.361702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.373326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.373354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.373386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.167 [2024-07-14 19:05:40.388904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.167 [2024-07-14 19:05:40.388948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.167 [2024-07-14 19:05:40.388964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.428 [2024-07-14 19:05:40.402983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.428 [2024-07-14 
19:05:40.403015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.428 [2024-07-14 19:05:40.403033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.428 [2024-07-14 19:05:40.414450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.428 [2024-07-14 19:05:40.414478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.428 [2024-07-14 19:05:40.414508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.428 [2024-07-14 19:05:40.428248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.428277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.428294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.439566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.439594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.439623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.454107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.454135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.454166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.469410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.469437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.469469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.484486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.484538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.484554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.496932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.496960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.497001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.511309] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.511337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.511367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.522041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.522071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.522104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.536361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.536392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.536409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.552906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.552934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.552966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.567862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.567916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.567933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.579182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.579211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.579242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.592053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.592082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.592113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.605458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.605488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.605505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.617589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.617633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.617650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.629922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.629951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.629968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.429 [2024-07-14 19:05:40.641728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.429 [2024-07-14 19:05:40.641755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.429 [2024-07-14 19:05:40.641786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.655486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.655518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 
19:05:40.655535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.669929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.669959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.669976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.682002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.682032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.682049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.697248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.697294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.697310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.708024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.708052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12451 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.708091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.722095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.722122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.722153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.736114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.736143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.736160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.750458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.750492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.750510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.764133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.764163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.764179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.777058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.777088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.777104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.788512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.788545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.788563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.801771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.801804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.801822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.817177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.817220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.817236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.828522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.828550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.828583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.844273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.844306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.844324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.857349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.857383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.857401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.872894] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.872923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.872939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.888220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.888250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.888266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.899742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.690 [2024-07-14 19:05:40.899775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.690 [2024-07-14 19:05:40.899794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.690 [2024-07-14 19:05:40.915428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.950 [2024-07-14 19:05:40.915462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.950 [2024-07-14 19:05:40.915484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:52.950 [2024-07-14 19:05:40.929188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.950 [2024-07-14 19:05:40.929217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.950 [2024-07-14 19:05:40.929250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.950 [2024-07-14 19:05:40.941913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.950 [2024-07-14 19:05:40.941971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.950 [2024-07-14 19:05:40.941992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.950 [2024-07-14 19:05:40.955129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.950 [2024-07-14 19:05:40.955158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.950 [2024-07-14 19:05:40.955193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.950 [2024-07-14 19:05:40.968921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.950 [2024-07-14 19:05:40.968965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.950 [2024-07-14 19:05:40.968980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.950 [2024-07-14 19:05:40.982036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.950 [2024-07-14 19:05:40.982063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.950 [2024-07-14 19:05:40.982094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.950 [2024-07-14 19:05:40.995051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.950 [2024-07-14 19:05:40.995078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.950 [2024-07-14 19:05:40.995107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.009335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.009367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.009385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.021298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.021330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 
19:05:41.021349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.037574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.037618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.037635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.054369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.054402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.054421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.067489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.067520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.067552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.081537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.081571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7693 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.081589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.094205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.094250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.094267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.108177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.108207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.108224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.119764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.119797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.119815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.132411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.132457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.132476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.146154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.146182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.146214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.157743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.157776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.157794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:52.951 [2024-07-14 19:05:41.172129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:52.951 [2024-07-14 19:05:41.172161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.951 [2024-07-14 19:05:41.172177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.209 [2024-07-14 19:05:41.184630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.184659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.184689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.200571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.200605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.200624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.217992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.218021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.218038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.231154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.231196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.231215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.243570] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.243603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.243621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.257548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.257581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.257599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.270745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.270776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.270792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.284500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.284533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.284551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.297391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.297425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.297449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.311468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.311498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.311515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.325474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.325504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.325520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.337315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.337348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.337367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 [2024-07-14 19:05:41.355012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21a13c0) 00:33:53.210 [2024-07-14 19:05:41.355040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.210 [2024-07-14 19:05:41.355070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.210 00:33:53.210 Latency(us) 00:33:53.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.210 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:53.210 nvme0n1 : 2.01 18876.50 73.74 0.00 0.00 6772.03 3519.53 21942.42 00:33:53.210 =================================================================================================================== 00:33:53.210 Total : 18876.50 73.74 0.00 0.00 6772.03 3519.53 21942.42 00:33:53.210 0 00:33:53.210 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:53.210 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:53.210 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:53.210 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:53.210 | .driver_specific 00:33:53.210 | .nvme_error 00:33:53.210 | .status_code 00:33:53.210 | .command_transient_transport_error' 00:33:53.468 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:33:53.468 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3746942 
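The check above reads the per-bdev NVMe error counters back over the bperf RPC socket and passes when at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded. A minimal sketch of that extraction, assuming a `bdev_get_iostat`-shaped JSON payload (the sample document below is illustrative, not captured from this run) and `jq` on PATH:

```shell
# Sketch of digest.sh's get_transient_errcount: pull the transient transport
# error counter out of bdev_get_iostat-style JSON with the same jq filter
# seen in the trace. Sample payload is hypothetical.
iostat_json='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":148}}}}]}'

errcount=$(echo "$iostat_json" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

# The test asserts the counter is nonzero, i.e. the injected digest
# corruption actually surfaced as transient transport errors.
if [ "$errcount" -gt 0 ]; then
  echo "transient errors: $errcount"
fi
```

In the real script the JSON comes from `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1` rather than a literal.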
00:33:53.468 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3746942 ']' 00:33:53.468 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3746942 00:33:53.468 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:53.468 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:53.468 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746942 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746942' 00:33:53.727 killing process with pid 3746942 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3746942 00:33:53.727 Received shutdown signal, test time was about 2.000000 seconds 00:33:53.727 00:33:53.727 Latency(us) 00:33:53.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.727 =================================================================================================================== 00:33:53.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3746942 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:53.727 19:05:41 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3747356 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3747356 /var/tmp/bperf.sock 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3747356 ']' 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:53.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:53.727 19:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:53.988 [2024-07-14 19:05:41.972450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:33:53.988 [2024-07-14 19:05:41.972524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747356 ] 00:33:53.988 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:53.988 Zero copy mechanism will not be used. 00:33:53.988 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.988 [2024-07-14 19:05:42.030572] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.988 [2024-07-14 19:05:42.119643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.246 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:54.246 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:54.246 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:54.246 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:54.503 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:54.503 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.503 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.503 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.503 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:33:54.503 19:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:55.072 nvme0n1 00:33:55.072 19:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:55.072 19:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.072 19:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:55.072 19:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.072 19:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:55.072 19:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:55.072 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:55.072 Zero copy mechanism will not be used. 00:33:55.072 Running I/O for 2 seconds... 
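The setup traced above boils down to three RPCs before I/O starts: enable NVMe error statistics with unlimited bdev retries, attach the controller with TCP data digest enabled, then arm the accel framework to corrupt every 32nd crc32c computation. A sketch of that sequence, with `bperf_rpc` stubbed to echo so the commands are visible without a live SPDK target (in digest.sh it wraps `rpc.py -s /var/tmp/bperf.sock`, and the injection call actually goes through `rpc_cmd`):

```shell
# Stub: print the rpc.py invocation instead of issuing it.
bperf_rpc() { echo "rpc.py -s /var/tmp/bperf.sock $*"; }

# 1. Count NVMe error statuses per bdev; retry failed I/O indefinitely
#    so transient errors accumulate instead of failing the job.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 2. Attach over TCP with data digest (--ddgst) so received payloads
#    are crc32c-verified.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Corrupt every 32nd crc32c result so data digest checks fail,
#    producing the COMMAND TRANSIENT TRANSPORT ERROR completions below.
bperf_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
```

With `-i 32` and the qd-16 randread workload, roughly one in 32 reads reports a data digest error, which is what the following two seconds of log show.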
00:33:55.072 [2024-07-14 19:05:43.146857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.146937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.146958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.154557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.154595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.154615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.162736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.162773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.162793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.170298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.170333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.170353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.177976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.178009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.178027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.184688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.184723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.184743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.192184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.192248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.192270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.199493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.199528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.199547] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.206580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.206615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.206634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.212873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.212933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.212951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.219257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.072 [2024-07-14 19:05:43.219293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.072 [2024-07-14 19:05:43.219312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.072 [2024-07-14 19:05:43.225540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.225576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.225594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.231907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.231954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.231971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.238153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.238185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.238224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.244451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.244487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.244505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.250661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.250696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.250714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.256798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.256833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.256852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.263014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.263059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.263076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.269185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.269235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.269253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.275380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.275415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.275434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.281804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.281838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.281857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.288283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.288318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.288337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.073 [2024-07-14 19:05:43.294527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.073 [2024-07-14 19:05:43.294562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.073 [2024-07-14 19:05:43.294581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.332 [2024-07-14 19:05:43.300885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 
00:33:55.332 [2024-07-14 19:05:43.300934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.332 [2024-07-14 19:05:43.300951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.332 [2024-07-14 19:05:43.307167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.332 [2024-07-14 19:05:43.307216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.332 [2024-07-14 19:05:43.307235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.332 [2024-07-14 19:05:43.313387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.332 [2024-07-14 19:05:43.313422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.332 [2024-07-14 19:05:43.313442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.332 [2024-07-14 19:05:43.319602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.332 [2024-07-14 19:05:43.319636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.332 [2024-07-14 19:05:43.319654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.332 [2024-07-14 19:05:43.325668] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.332 [2024-07-14 19:05:43.325702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.332 [2024-07-14 19:05:43.325722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.332 [2024-07-14 19:05:43.332177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.332 [2024-07-14 19:05:43.332225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.332244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.338340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.338375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.338393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.344577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.344612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.344637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.351075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.351106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.351123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.357502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.357537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.357556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.363797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.363831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.363850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.370015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.370046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.370062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.374373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.374421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.374441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.379540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.379574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.379593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.385806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.385840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.385859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.392284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.392319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.392337] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.398702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.398743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.398763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.405723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.405758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.405777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.411991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.412021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.412051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.418172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.418220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.418239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.424419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.424455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.424474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.430733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.430769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.430788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.436957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.436987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.437018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.443180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.443211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.443245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.449466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.449500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.449518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.455613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.455648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.455666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.461854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.461896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.461916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.468144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.468175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.468192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.474348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.474383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.474402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.333 [2024-07-14 19:05:43.480670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.333 [2024-07-14 19:05:43.480705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.333 [2024-07-14 19:05:43.480724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.487047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.487079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.487095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.493408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 
00:33:55.334 [2024-07-14 19:05:43.493443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.493461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.499673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.499708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.499727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.505840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.505882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.505923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.512239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.512274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.512293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.518483] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.518517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.518535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.524755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.524789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.524808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.531074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.531104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.531136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.537425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.537460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.537478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.543798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.543833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.543851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.550065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.550109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.550127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.334 [2024-07-14 19:05:43.556319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.334 [2024-07-14 19:05:43.556356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.334 [2024-07-14 19:05:43.556375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.562630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.562672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.562692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.568953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.568985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.569003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.575312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.575349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.575368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.581655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.581691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.581710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.587887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.587937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.587954] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.593919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.593952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.593969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.600230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.600265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.600284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.606458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.606493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.606512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.612523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.612559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.612577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.618757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.618793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.618812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.625047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.625079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.625096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.631380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.631416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.631434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.637767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.637803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.637822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.644226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.644261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.644280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.650454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.650491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.650509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.657074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.657106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.657123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.663381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.663417] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.663435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.669802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.669837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.669862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.676098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.676143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.676159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.682405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.682440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.682459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.688616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.688651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.688670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.694984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.695015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.695048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.701519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.701554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.701573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.594 [2024-07-14 19:05:43.707779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.594 [2024-07-14 19:05:43.707814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.594 [2024-07-14 19:05:43.707833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.714182] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.714219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.714238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.720521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.720556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.720575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.726853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.726902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.726936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.733264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.733313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.733332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.739700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.739734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.739752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.746403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.746439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.746457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.753465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.753500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.753519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.760101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.760146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.760162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.766389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.766424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.766443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.772609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.772644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.772663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.779047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.779077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.779109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.785437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.785472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.785491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.791738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.791773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.791792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.797972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.798003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.798020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.804348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.804383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.804401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.810500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.810534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.810553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.595 [2024-07-14 19:05:43.816568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.595 [2024-07-14 19:05:43.816602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.595 [2024-07-14 19:05:43.816621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.822724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.822760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.822779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.828981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.829013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.829030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.835358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.835402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.835423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.841610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.841644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.841662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.848028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.848059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.848094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.854291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.854325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.854343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.860605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 
19:05:43.860640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.860659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.867057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.867088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.867104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.873227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.873262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.873282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.879419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.879450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.879467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.883618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.883675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.883698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.889390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.889424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.889443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.896842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.896890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.896913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.905224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.905260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.905278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.913236] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.913269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.913287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.920022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.920054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.920071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.925749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.854 [2024-07-14 19:05:43.925779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.854 [2024-07-14 19:05:43.925795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.854 [2024-07-14 19:05:43.931564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.931609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.931624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.937741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.937776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.937795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.943963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.944008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.944031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.950101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.950145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.950161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.956311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.956341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.956373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.962346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.962382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.962400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.968657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.968690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.968708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.974871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.974913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.974932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.981118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.981167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 
19:05:43.981185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.987292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.987327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.987346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:43.993661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:43.993707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:43.993727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.000044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.000081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.000098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.006115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.006145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.006161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.012329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.012362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.012379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.018683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.018716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.018734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.025103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.025135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.025173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.031615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.031650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.031669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.037957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.037989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.038006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.044233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.044268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.044287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.050619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.050653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.050672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.056776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.056810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.056829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.063190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.063220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.063254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.069469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.069504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.069523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.855 [2024-07-14 19:05:44.075824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:55.855 [2024-07-14 19:05:44.075859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.855 [2024-07-14 19:05:44.075885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.113 [2024-07-14 19:05:44.082432] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.113 [2024-07-14 19:05:44.082468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.113 [2024-07-14 19:05:44.082487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.113 [2024-07-14 19:05:44.088811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.113 [2024-07-14 19:05:44.088846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.113 [2024-07-14 19:05:44.088865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.113 [2024-07-14 19:05:44.094849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.113 [2024-07-14 19:05:44.094894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.113 [2024-07-14 19:05:44.094915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.113 [2024-07-14 19:05:44.101303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.113 [2024-07-14 19:05:44.101338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.113 [2024-07-14 19:05:44.101357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0
00:33:56.113 [2024-07-14 19:05:44.107404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.113 [2024-07-14 19:05:44.107438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.113 [2024-07-14 19:05:44.107462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.113 [2024-07-14 19:05:44.113594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.113 [2024-07-14 19:05:44.113629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.113 [2024-07-14 19:05:44.113647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.113 [2024-07-14 19:05:44.119714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.113 [2024-07-14 19:05:44.119748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.113 [2024-07-14 19:05:44.119767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.113 [2024-07-14 19:05:44.126104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.113 [2024-07-14 19:05:44.126149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.113 [2024-07-14 19:05:44.126167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.113 [2024-07-14 19:05:44.132198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.113 [2024-07-14 19:05:44.132246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.113 [2024-07-14 19:05:44.132265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.113 [2024-07-14 19:05:44.138425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.113 [2024-07-14 19:05:44.138460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.113 [2024-07-14 19:05:44.138478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.113 [2024-07-14 19:05:44.144716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.113 [2024-07-14 19:05:44.144751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.113 [2024-07-14 19:05:44.144769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.151022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.151052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.151085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.157270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.157305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.157324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.163987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.164020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.164037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.170184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.170214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.170245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.176311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.176345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.176363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.182562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.182597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.182616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.188646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.188680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.188698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.194787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.194821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.194840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.200925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.200954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.200970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.207228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.207263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.207282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.213517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.213551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.213576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.219767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.219802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.219821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.226062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.226106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.226121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.232260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.232295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.232314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.238408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.238443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.238461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.244618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.244653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.244671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.250953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.251000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.251017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.257272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.257306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.257325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.263374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.263409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.263428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.269690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.269731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.269750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.275830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.275865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.275893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.282081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.282112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.282144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.288379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.288415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.288434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.294529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.294563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.294581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.300823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.300858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.300884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.306967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.306996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.307027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.313198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.313232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.114 [2024-07-14 19:05:44.313251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.114 [2024-07-14 19:05:44.319316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.114 [2024-07-14 19:05:44.319351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.115 [2024-07-14 19:05:44.319370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.115 [2024-07-14 19:05:44.325429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.115 [2024-07-14 19:05:44.325464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.115 [2024-07-14 19:05:44.325482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.115 [2024-07-14 19:05:44.331725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.115 [2024-07-14 19:05:44.331760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.115 [2024-07-14 19:05:44.331779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.115 [2024-07-14 19:05:44.338077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.115 [2024-07-14 19:05:44.338109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.115 [2024-07-14 19:05:44.338125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.344502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.344538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.344556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.350712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.350747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.350767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.356886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.356934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.356950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.363098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.363128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.363160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.369286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.369321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.369339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.375595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.375630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.375655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.381923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.381954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.381972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.388401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.388436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.388455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.394579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.394613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.394632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.401036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.401065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.401095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.407447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.407481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.407500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.414492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.414528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.414547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.421387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.421423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.421442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.428423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.428459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.428477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.435589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.435625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.373 [2024-07-14 19:05:44.435644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.373 [2024-07-14 19:05:44.442664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.373 [2024-07-14 19:05:44.442699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.442718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.449836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.449871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.449899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.457275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.457310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.457329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.465310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.465364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.465388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.472208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.472244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.472263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.478997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.479029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.479046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.485781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.485816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.485835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.492768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.492816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.492843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.499572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.499606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.499625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.506660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.506695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.506714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.513286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.513321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.513340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.519762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.519796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.519815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.523862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.523904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.523937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.529716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.529751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.529769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.536087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.536118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.536135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.542447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.542482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.542500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.548771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.548811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.548831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.555195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.555230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.555249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.561496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.561532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.561550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.567948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.567994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.568012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.574243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.574279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.574297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.580457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.580492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.580511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.586730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.586765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.586783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.374 [2024-07-14 19:05:44.593014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.374 [2024-07-14 19:05:44.593060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.374 [2024-07-14 19:05:44.593077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:56.633 [2024-07-14 19:05:44.599454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.633 [2024-07-14 19:05:44.599489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.633 [2024-07-14 19:05:44.599507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:56.633 [2024-07-14 19:05:44.605897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.633 [2024-07-14 19:05:44.605945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.633 [2024-07-14 19:05:44.605962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:56.633 [2024-07-14 19:05:44.612140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.633 [2024-07-14 19:05:44.612188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.633 [2024-07-14 19:05:44.612204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:56.633 [2024-07-14 19:05:44.618403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10)
00:33:56.633 [2024-07-14 19:05:44.618438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.633 [2024-07-14 19:05:44.618457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:56.633 [2024-07-14 19:05:44.624706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.633 [2024-07-14 19:05:44.624741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.633 [2024-07-14 19:05:44.624759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.633 [2024-07-14 19:05:44.631075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.633 [2024-07-14 19:05:44.631123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.633 [2024-07-14 19:05:44.631139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.633 [2024-07-14 19:05:44.637347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.633 [2024-07-14 19:05:44.637382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.633 [2024-07-14 19:05:44.637419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.633 [2024-07-14 19:05:44.643492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.633 [2024-07-14 19:05:44.643527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.633 [2024-07-14 19:05:44.643545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.633 [2024-07-14 19:05:44.649725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.633 [2024-07-14 19:05:44.649759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.633 [2024-07-14 19:05:44.649777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.633 [2024-07-14 19:05:44.656018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.633 [2024-07-14 19:05:44.656048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.633 [2024-07-14 19:05:44.656073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.633 [2024-07-14 19:05:44.662116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.633 [2024-07-14 19:05:44.662146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.633 [2024-07-14 19:05:44.662162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.633 [2024-07-14 19:05:44.669052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.633 [2024-07-14 19:05:44.669099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.669115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.675355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.675390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.675409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.681560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.681594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.681613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.687838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.687873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.687902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.694241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.694277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.694295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.700457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.700492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.700511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.706667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.706702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.706720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.713151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.713213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.713230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.719511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.719547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.719565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.725835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.725887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.725922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.732390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.732425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.732445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.738642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.738678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.738697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.744800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.744835] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.744854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.751051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.751081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.751112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.757514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.757550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.757569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.764013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.764042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.764076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.770378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.770414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.770433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.776744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.776780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.776798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.783112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.783157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.783173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.789452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.789487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.789506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.795829] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.795864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.795893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.802184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.802234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.802253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.808418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.808452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.808471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.814609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.814645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.814663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.820722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.820763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.820783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.827022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.827069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.827085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.833370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.833404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.833423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.839628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.839662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.839681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.845961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.846007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.634 [2024-07-14 19:05:44.846023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.634 [2024-07-14 19:05:44.852433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.634 [2024-07-14 19:05:44.852468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.635 [2024-07-14 19:05:44.852486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.894 [2024-07-14 19:05:44.859028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.894 [2024-07-14 19:05:44.859060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.894 [2024-07-14 19:05:44.859077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.894 [2024-07-14 19:05:44.865295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.894 [2024-07-14 19:05:44.865330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.894 [2024-07-14 
19:05:44.865349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.894 [2024-07-14 19:05:44.871508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.894 [2024-07-14 19:05:44.871543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.894 [2024-07-14 19:05:44.871561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.894 [2024-07-14 19:05:44.877700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.894 [2024-07-14 19:05:44.877735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.894 [2024-07-14 19:05:44.877753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.894 [2024-07-14 19:05:44.883934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.894 [2024-07-14 19:05:44.883965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.894 [2024-07-14 19:05:44.883982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.894 [2024-07-14 19:05:44.890144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.894 [2024-07-14 19:05:44.890191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6336 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.894 [2024-07-14 19:05:44.890207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.895 [2024-07-14 19:05:44.896360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.895 [2024-07-14 19:05:44.896395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.895 [2024-07-14 19:05:44.896413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.895 [2024-07-14 19:05:44.902491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.895 [2024-07-14 19:05:44.902526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.895 [2024-07-14 19:05:44.902545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.895 [2024-07-14 19:05:44.908731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.895 [2024-07-14 19:05:44.908766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.895 [2024-07-14 19:05:44.908784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.895 [2024-07-14 19:05:44.914968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.895 [2024-07-14 19:05:44.914998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.895 [2024-07-14 19:05:44.915031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.895 [2024-07-14 19:05:44.921887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.895 [2024-07-14 19:05:44.921936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.895 [2024-07-14 19:05:44.921953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.895 [2024-07-14 19:05:44.928127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.895 [2024-07-14 19:05:44.928175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.895 [2024-07-14 19:05:44.928196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.895 [2024-07-14 19:05:44.934416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10) 00:33:56.895 [2024-07-14 19:05:44.934451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.895 [2024-07-14 19:05:44.934470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.895 [2024-07-14 19:05:44.940736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbcf10)
00:33:56.895 [2024-07-14 19:05:44.940771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.895 [2024-07-14 19:05:44.940789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[repeated through 2024-07-14 19:05:45.141948: the same three-record pattern (nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbcf10), the failing READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) recurs for every completed READ on qid:1, varying only in cid, lba, and sqhd]
00:33:57.154
00:33:57.154 Latency(us)
00:33:57.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:57.154 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:57.155 nvme0n1 : 2.00 4882.07 610.26 0.00 0.00 3271.81 904.15 8738.13
00:33:57.155 ===================================================================================================================
00:33:57.155 Total : 4882.07 610.26 0.00 0.00 3271.81 904.15 8738.13
00:33:57.155 0
00:33:57.155 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:57.155 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:57.155 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:57.155 | .driver_specific
00:33:57.155 | .nvme_error
00:33:57.155 | .status_code
00:33:57.155 | .command_transient_transport_error'
00:33:57.155 19:05:45
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 315 > 0 ))
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3747356
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3747356 ']'
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3747356
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3747356
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3747356'
00:33:57.414 killing process with pid 3747356
00:33:57.414 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3747356
00:33:57.414 Received shutdown signal, test time was about 2.000000 seconds
00:33:57.414
00:33:57.414 Latency(us)
00:33:57.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:57.415 ===================================================================================================================
00:33:57.415 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:57.415 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3747356
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3747762
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3747762 /var/tmp/bperf.sock
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3747762 ']'
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:57.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:57.674 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:57.674 [2024-07-14 19:05:45.704870] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:33:57.674 [2024-07-14 19:05:45.704953] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747762 ]
00:33:57.674 EAL: No free 2048 kB hugepages reported on node 1
00:33:57.674 [2024-07-14 19:05:45.766615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:57.674 [2024-07-14 19:05:45.855256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:57.932 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:57.932 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:57.932 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:57.932 19:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:58.191 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:58.191 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:58.191 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:58.191 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:58.191 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:58.191 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:58.449 nvme0n1
00:33:58.449 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:58.449 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:58.449 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:58.449 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:58.449 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:58.449 19:05:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:58.707 Running I/O for 2 seconds...
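The `get_transient_errcount` step earlier in the log pipes `bdev_get_iostat` output through a jq filter (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`) and the test passes when that counter is positive. The same extraction can be traced offline in Python; the sample JSON below is an assumption that mirrors only the fields the filter touches, not a real iostat capture:

```python
import json

# Hypothetical sample shaped like `bdev_get_iostat` output; only the keys
# selected by the test's jq filter are included (an assumption, not real data).
sample = json.loads("""
{"bdevs": [{"name": "nvme0n1",
            "driver_specific": {"nvme_error": {"status_code": {
                "command_transient_transport_error": 315}}}}]}
""")

# Same path as the jq filter:
# .bdevs[0] | .driver_specific | .nvme_error | .status_code
#           | .command_transient_transport_error
count = sample["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
    "command_transient_transport_error"]

# The shell test then gates on (( count > 0 )).
print(count)  # → 315
```

With `--nvme-error-stat` enabled (as set via `bdev_nvme_set_options` above), every injected CRC32C corruption that surfaces as a TRANSIENT TRANSPORT ERROR completion increments this per-status-code counter, which is why a positive value proves the digest errors were both injected and detected.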
00:33:58.707 [2024-07-14 19:05:46.780901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ee5c8
00:33:58.707 [2024-07-14 19:05:46.781944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:58.707 [2024-07-14 19:05:46.781983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
[repeated through 2024-07-14 19:05:47.162687: the same three-record pattern (tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40), the failing WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) recurs for every completed WRITE on qid:1, varying only in pdu, cid, lba, and sqhd]
00:33:58.969 [2024-07-14 19:05:47.173643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with
pdu=0x2000190e6738 00:33:58.969 [2024-07-14 19:05:47.174634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.969 [2024-07-14 19:05:47.174665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:58.969 [2024-07-14 19:05:47.186573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e5658 00:33:58.969 [2024-07-14 19:05:47.187543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.969 [2024-07-14 19:05:47.187574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.228 [2024-07-14 19:05:47.199764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ef270 00:33:59.228 [2024-07-14 19:05:47.200722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.228 [2024-07-14 19:05:47.200753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.228 [2024-07-14 19:05:47.212890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fb8b8 00:33:59.228 [2024-07-14 19:05:47.214048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.228 [2024-07-14 19:05:47.214075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.228 [2024-07-14 19:05:47.224844] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190df550 00:33:59.228 [2024-07-14 19:05:47.225966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.228 [2024-07-14 19:05:47.226007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.228 [2024-07-14 19:05:47.238063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190dfdc0 00:33:59.228 [2024-07-14 19:05:47.239366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.228 [2024-07-14 19:05:47.239397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.252142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e88f8 00:33:59.229 [2024-07-14 19:05:47.253654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.253685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.263981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e8088 00:33:59.229 [2024-07-14 19:05:47.265454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.265483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 
19:05:47.277274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fc998 00:33:59.229 [2024-07-14 19:05:47.278882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.278925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.289080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190edd58 00:33:59.229 [2024-07-14 19:05:47.290376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.290408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.301946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f9f68 00:33:59.229 [2024-07-14 19:05:47.302933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.302962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.315161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f0ff8 00:33:59.229 [2024-07-14 19:05:47.316339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.316370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.327110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f4f40 00:33:59.229 [2024-07-14 19:05:47.329066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.329093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.338800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ef6a8 00:33:59.229 [2024-07-14 19:05:47.339805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.339841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.351937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f2948 00:33:59.229 [2024-07-14 19:05:47.353056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.353084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.363926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f81e0 00:33:59.229 [2024-07-14 19:05:47.365077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.365104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.377185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190dfdc0 00:33:59.229 [2024-07-14 19:05:47.378492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.378523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.390473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f57b0 00:33:59.229 [2024-07-14 19:05:47.391934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.391961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.403690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f9f68 00:33:59.229 [2024-07-14 19:05:47.405337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.405369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.417031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fda78 00:33:59.229 [2024-07-14 19:05:47.418823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.418855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.430195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190df550 00:33:59.229 [2024-07-14 19:05:47.432178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.432206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.443400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fb048 00:33:59.229 [2024-07-14 19:05:47.445523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.445554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.229 [2024-07-14 19:05:47.452350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e27f0 00:33:59.229 [2024-07-14 19:05:47.453356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.229 [2024-07-14 19:05:47.453387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:59.489 [2024-07-14 19:05:47.464458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f3a28 00:33:59.489 [2024-07-14 19:05:47.465412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.489 
[2024-07-14 19:05:47.465443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:59.489 [2024-07-14 19:05:47.477753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ed4e8 00:33:59.489 [2024-07-14 19:05:47.478891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.489 [2024-07-14 19:05:47.478945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.489 [2024-07-14 19:05:47.491946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f9f68 00:33:59.489 [2024-07-14 19:05:47.493254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.489 [2024-07-14 19:05:47.493285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:59.489 [2024-07-14 19:05:47.505026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e4140 00:33:59.489 [2024-07-14 19:05:47.506521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.489 [2024-07-14 19:05:47.506551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:59.489 [2024-07-14 19:05:47.517075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f96f8 00:33:59.489 [2024-07-14 19:05:47.518561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7993 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.489 [2024-07-14 19:05:47.518591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:59.489 [2024-07-14 19:05:47.530331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e3060 00:33:59.489 [2024-07-14 19:05:47.531953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.489 [2024-07-14 19:05:47.531994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:59.489 [2024-07-14 19:05:47.542110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f31b8 00:33:59.490 [2024-07-14 19:05:47.543273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.543305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.554935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e4578 00:33:59.490 [2024-07-14 19:05:47.555844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.555884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.568194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ea248 00:33:59.490 [2024-07-14 19:05:47.569350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:55 nsid:1 lba:9892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.569381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.580117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e8088 00:33:59.490 [2024-07-14 19:05:47.582113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.582141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.591009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f46d0 00:33:59.490 [2024-07-14 19:05:47.592000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.592026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.604324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e27f0 00:33:59.490 [2024-07-14 19:05:47.605460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.605491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.617674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190dfdc0 00:33:59.490 [2024-07-14 19:05:47.619061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.619089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.631863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ff3c8 00:33:59.490 [2024-07-14 19:05:47.633376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.633407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.645018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190dfdc0 00:33:59.490 [2024-07-14 19:05:47.646685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.646715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.657007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f0788 00:33:59.490 [2024-07-14 19:05:47.658620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.658650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.670302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with 
pdu=0x2000190ec408 00:33:59.490 [2024-07-14 19:05:47.672093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.672142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.682103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f6890 00:33:59.490 [2024-07-14 19:05:47.683410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.683442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.695012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e6b70 00:33:59.490 [2024-07-14 19:05:47.696141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.696169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.490 [2024-07-14 19:05:47.709485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f1430 00:33:59.490 [2024-07-14 19:05:47.711643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.490 [2024-07-14 19:05:47.711674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.807 [2024-07-14 19:05:47.718563] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e9e10 00:33:59.807 [2024-07-14 19:05:47.719555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.719583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.731010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f1ca0 00:33:59.808 [2024-07-14 19:05:47.732129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.732157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.743867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fc128 00:33:59.808 [2024-07-14 19:05:47.745031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.745074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.756925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fc998 00:33:59.808 [2024-07-14 19:05:47.758211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.758238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 
19:05:47.770186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e7818 00:33:59.808 [2024-07-14 19:05:47.771660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.771690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.782027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190dece0 00:33:59.808 [2024-07-14 19:05:47.784038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.784072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.792982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e1710 00:33:59.808 [2024-07-14 19:05:47.793972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.793999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.806285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fef90 00:33:59.808 [2024-07-14 19:05:47.807600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.807632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 
sqhd:0020 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.819789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e01f8 00:33:59.808 [2024-07-14 19:05:47.821079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.821107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.832997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e5220 00:33:59.808 [2024-07-14 19:05:47.834458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.834489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.846173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e5a90 00:33:59.808 [2024-07-14 19:05:47.847801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.847831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.859452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fdeb0 00:33:59.808 [2024-07-14 19:05:47.861255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.861285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.871227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f46d0 00:33:59.808 [2024-07-14 19:05:47.872511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.872541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.884051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f7970 00:33:59.808 [2024-07-14 19:05:47.885155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.885199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.895956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e6300 00:33:59.808 [2024-07-14 19:05:47.897942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.897969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.909847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e3498 00:33:59.808 [2024-07-14 19:05:47.911436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.911467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.920235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e84c0 00:33:59.808 [2024-07-14 19:05:47.921178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.921204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.934340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e2c28 00:33:59.808 [2024-07-14 19:05:47.935906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.935947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.947637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190dece0 00:33:59.808 [2024-07-14 19:05:47.949395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.949426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.957748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f8e88 00:33:59.808 [2024-07-14 19:05:47.958836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:59.808 [2024-07-14 19:05:47.958866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.971522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ea680 00:33:59.808 [2024-07-14 19:05:47.972449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.972480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.986006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e3060 00:33:59.808 [2024-07-14 19:05:47.987942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.987983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:47.996180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e99d8 00:33:59.808 [2024-07-14 19:05:47.997416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:47.997446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:59.808 [2024-07-14 19:05:48.009529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190dfdc0 00:33:59.808 [2024-07-14 19:05:48.010937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14886 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.808 [2024-07-14 19:05:48.010982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.022848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f3e60 00:34:00.070 [2024-07-14 19:05:48.024412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.024443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.033005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e5658 00:34:00.070 [2024-07-14 19:05:48.033873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.033909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.047160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fc560 00:34:00.070 [2024-07-14 19:05:48.048260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.048293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.060395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e1710 00:34:00.070 [2024-07-14 19:05:48.061635] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.061667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.073565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190dece0 00:34:00.070 [2024-07-14 19:05:48.074604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.074635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.085533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fda78 00:34:00.070 [2024-07-14 19:05:48.087467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.087498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.099619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f20d8 00:34:00.070 [2024-07-14 19:05:48.101174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.101201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.110408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ed4e8 00:34:00.070 [2024-07-14 19:05:48.111072] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.111106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.124075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190eff18 00:34:00.070 [2024-07-14 19:05:48.125591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.125623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.137195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e6738 00:34:00.070 [2024-07-14 19:05:48.139091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.139120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.150470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fc128 00:34:00.070 [2024-07-14 19:05:48.152510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.152541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.160631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with 
pdu=0x2000190f5be8 00:34:00.070 [2024-07-14 19:05:48.162024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.162052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.173874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190eb328 00:34:00.070 [2024-07-14 19:05:48.175427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.175458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.187088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e8d30 00:34:00.070 [2024-07-14 19:05:48.188797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.188828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.200377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e12d8 00:34:00.070 [2024-07-14 19:05:48.202303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.202334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.213646] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fd640 00:34:00.070 [2024-07-14 19:05:48.215711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.215741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.223826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e4de8 00:34:00.070 [2024-07-14 19:05:48.225184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.225210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.237121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f92c0 00:34:00.070 [2024-07-14 19:05:48.238670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.238701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.250362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190df988 00:34:00.070 [2024-07-14 19:05:48.252043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.252070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 
19:05:48.262236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fe2e8 00:34:00.070 [2024-07-14 19:05:48.263404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.263434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.275060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190eff18 00:34:00.070 [2024-07-14 19:05:48.276067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.276095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:00.070 [2024-07-14 19:05:48.286965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f7538 00:34:00.070 [2024-07-14 19:05:48.288777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.070 [2024-07-14 19:05:48.288807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.297999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e8d30 00:34:00.328 [2024-07-14 19:05:48.298862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.298903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 
sqhd:0005 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.311352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f4298 00:34:00.328 [2024-07-14 19:05:48.312346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.312378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.324606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190de8a8 00:34:00.328 [2024-07-14 19:05:48.325804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.325835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.337812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e5ec8 00:34:00.328 [2024-07-14 19:05:48.339146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.339173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.351091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f1ca0 00:34:00.328 [2024-07-14 19:05:48.352598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.352629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.364377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f4298 00:34:00.328 [2024-07-14 19:05:48.366057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.366084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.377620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ef6a8 00:34:00.328 [2024-07-14 19:05:48.379459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.379490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.390938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ff3c8 00:34:00.328 [2024-07-14 19:05:48.392954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.392994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.399973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f6cc8 00:34:00.328 [2024-07-14 19:05:48.400826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.400857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.411864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f4b08 00:34:00.328 [2024-07-14 19:05:48.412680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.412710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.425108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ebfd0 00:34:00.328 [2024-07-14 19:05:48.426122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.426149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.438319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f35f0 00:34:00.328 [2024-07-14 19:05:48.439470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.439505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.451590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ecc78 00:34:00.328 [2024-07-14 19:05:48.452935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 
[2024-07-14 19:05:48.452978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.464871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f92c0 00:34:00.328 [2024-07-14 19:05:48.466394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.466425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.478138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ebfd0 00:34:00.328 [2024-07-14 19:05:48.479827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.479858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.491418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190dfdc0 00:34:00.328 [2024-07-14 19:05:48.493234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.328 [2024-07-14 19:05:48.493265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:00.328 [2024-07-14 19:05:48.503239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190eaef0 00:34:00.329 [2024-07-14 19:05:48.504564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10867 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:00.329 [2024-07-14 19:05:48.504594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:00.329 [2024-07-14 19:05:48.514800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e0ea0 00:34:00.329 [2024-07-14 19:05:48.516641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.329 [2024-07-14 19:05:48.516671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:00.329 [2024-07-14 19:05:48.525599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fa7d8 00:34:00.329 [2024-07-14 19:05:48.526445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.329 [2024-07-14 19:05:48.526475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:00.329 [2024-07-14 19:05:48.538864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f1430 00:34:00.329 [2024-07-14 19:05:48.539884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.329 [2024-07-14 19:05:48.539929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:00.329 [2024-07-14 19:05:48.553024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e5658 00:34:00.329 [2024-07-14 19:05:48.554281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:25445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.329 [2024-07-14 19:05:48.554327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:00.586 [2024-07-14 19:05:48.566345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f8a50 00:34:00.586 [2024-07-14 19:05:48.567668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.586 [2024-07-14 19:05:48.567700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:00.586 [2024-07-14 19:05:48.579549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e49b0 00:34:00.586 [2024-07-14 19:05:48.581083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.586 [2024-07-14 19:05:48.581110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:00.586 [2024-07-14 19:05:48.590435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ef270 00:34:00.586 [2024-07-14 19:05:48.591076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.586 [2024-07-14 19:05:48.591104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:00.586 [2024-07-14 19:05:48.604965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f1430 00:34:00.586 [2024-07-14 19:05:48.606639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.586 [2024-07-14 19:05:48.606670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:00.586 [2024-07-14 19:05:48.616755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f7da8 00:34:00.586 [2024-07-14 19:05:48.617952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.586 [2024-07-14 19:05:48.617979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:00.586 [2024-07-14 19:05:48.630863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e6738 00:34:00.586 [2024-07-14 19:05:48.632730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.586 [2024-07-14 19:05:48.632760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:00.586 [2024-07-14 19:05:48.644133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190efae0 00:34:00.587 [2024-07-14 19:05:48.646168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.646209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.653140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e23b8 
00:34:00.587 [2024-07-14 19:05:48.653990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.654016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.666448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f7da8 00:34:00.587 [2024-07-14 19:05:48.667462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.667492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.678452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f6458 00:34:00.587 [2024-07-14 19:05:48.679444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.679475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.691761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190fa3a0 00:34:00.587 [2024-07-14 19:05:48.692944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.692971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.704997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x20f6c40) with pdu=0x2000190f5378 00:34:00.587 [2024-07-14 19:05:48.706321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.706352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.718231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190ee5c8 00:34:00.587 [2024-07-14 19:05:48.719716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.719746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.731462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f6458 00:34:00.587 [2024-07-14 19:05:48.733151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.733179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.744671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190e12d8 00:34:00.587 [2024-07-14 19:05:48.746526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.746556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.757946] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190eaef0 00:34:00.587 [2024-07-14 19:05:48.760055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.760082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:00.587 [2024-07-14 19:05:48.767005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6c40) with pdu=0x2000190f8a50 00:34:00.587 [2024-07-14 19:05:48.767823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.587 [2024-07-14 19:05:48.767858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:00.587 00:34:00.587 Latency(us) 00:34:00.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.587 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:00.587 nvme0n1 : 2.01 20154.97 78.73 0.00 0.00 6343.49 2669.99 15728.64 00:34:00.587 =================================================================================================================== 00:34:00.587 Total : 20154.97 78.73 0.00 0.00 6343.49 2669.99 15728.64 00:34:00.587 0 00:34:00.587 19:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:00.587 19:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:00.587 19:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:00.587 19:05:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:00.587 | .driver_specific 00:34:00.587 | .nvme_error 00:34:00.587 | .status_code 00:34:00.587 | .command_transient_transport_error' 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3747762 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3747762 ']' 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3747762 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3747762 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3747762' 00:34:00.845 killing process with pid 3747762 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3747762 00:34:00.845 Received shutdown signal, test time was about 2.000000 seconds 00:34:00.845 00:34:00.845 Latency(us) 00:34:00.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.845 =================================================================================================================== 00:34:00.845 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:00.845 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- 
# wait 3747762 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3748195 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3748195 /var/tmp/bperf.sock 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3748195 ']' 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:01.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:01.103 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:01.103 [2024-07-14 19:05:49.307690] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:34:01.103 [2024-07-14 19:05:49.307771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748195 ] 00:34:01.103 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:01.103 Zero copy mechanism will not be used. 00:34:01.360 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.360 [2024-07-14 19:05:49.373816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.360 [2024-07-14 19:05:49.464396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.360 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:01.360 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:34:01.360 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:01.360 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:01.619 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:01.619 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.619 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:01.619 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.619 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
00:34:01.619 19:05:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:02.186 nvme0n1 00:34:02.186 19:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:02.186 19:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.186 19:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:02.186 19:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.186 19:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:02.186 19:05:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:02.445 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:02.445 Zero copy mechanism will not be used. 00:34:02.445 Running I/O for 2 seconds... 
00:34:02.445 [2024-07-14 19:05:50.466340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.445 [2024-07-14 19:05:50.466673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.466711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.472834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.473147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.473178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.478557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.478851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.478890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.484676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.484978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.485007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.490686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.490987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.491023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.497209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.497518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.497562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.503125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.503422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.503451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.509265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.509560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.509588] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.515561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.515856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.515893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.521163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.521459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.521487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.527028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.527323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.527351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.533530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.533823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:02.446 [2024-07-14 19:05:50.533852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.539801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.540114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.540143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.546086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.546382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.546410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.552600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.552998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.553042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.559242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.559522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.559551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.565810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.566167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.566210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.572312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.572608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.572644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.578830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.579146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.579176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.585448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.585734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.585762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.592031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.592352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.592380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.598574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.598900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.598930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.605061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.605380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.605408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.611657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 
00:34:02.446 [2024-07-14 19:05:50.611973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.612002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.617586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.617948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.617977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.623867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.624185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.624228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.630498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.630838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.630891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.636985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.637293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.637321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.642852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.446 [2024-07-14 19:05:50.643195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.446 [2024-07-14 19:05:50.643223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.446 [2024-07-14 19:05:50.649733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.447 [2024-07-14 19:05:50.650060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.447 [2024-07-14 19:05:50.650089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.447 [2024-07-14 19:05:50.656983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.447 [2024-07-14 19:05:50.657299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.447 [2024-07-14 19:05:50.657327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.447 [2024-07-14 
19:05:50.663316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.447 [2024-07-14 19:05:50.663504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.447 [2024-07-14 19:05:50.663532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.447 [2024-07-14 19:05:50.670184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.447 [2024-07-14 19:05:50.670501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.447 [2024-07-14 19:05:50.670528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.677513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.677844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.677895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.684388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.684496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.684522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.690804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.691112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.691141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.696888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.697212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.697254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.702846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.703194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.703221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.708647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.708965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.708993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.714210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.714515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.714543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.719970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.720266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.720294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.725909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.726266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.726310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.708 [2024-07-14 19:05:50.732079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:02.708 [2024-07-14 19:05:50.732374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.708 [2024-07-14 19:05:50.732403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.738500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.738815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.738851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.744619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.744947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.744976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.750466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.750780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.750807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.756918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.757227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.757255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.763096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.763195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.763220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.770004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.770310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.770337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.775596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.775935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.775964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.781272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.781590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.781618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.787109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.787418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.787445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.792795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.793130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.793172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.798733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.799081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.799109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.804583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.804930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.804957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.810468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.810788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.810815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.816504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.816812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.816855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.822443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.822760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.822787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.828795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.829079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.829108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.835353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.835661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.835688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.841068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.841372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.841409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.847116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.847402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.847429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.853040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.853360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.853387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.858790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.859104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.859132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.864732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.865063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.865091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.871047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.871372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.871399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.877679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.878024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.878053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.883940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.884250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.884277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.889921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.890262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.890289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.708 [2024-07-14 19:05:50.895693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.708 [2024-07-14 19:05:50.896062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.708 [2024-07-14 19:05:50.896091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.709 [2024-07-14 19:05:50.901607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.709 [2024-07-14 19:05:50.901951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.709 [2024-07-14 19:05:50.901980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.709 [2024-07-14 19:05:50.907748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.709 [2024-07-14 19:05:50.908093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.709 [2024-07-14 19:05:50.908121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.709 [2024-07-14 19:05:50.913699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.709 [2024-07-14 19:05:50.914041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.709 [2024-07-14 19:05:50.914069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.709 [2024-07-14 19:05:50.920284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.709 [2024-07-14 19:05:50.920590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.709 [2024-07-14 19:05:50.920617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.709 [2024-07-14 19:05:50.926728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.709 [2024-07-14 19:05:50.926844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.709 [2024-07-14 19:05:50.926872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.969 [2024-07-14 19:05:50.933060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.969 [2024-07-14 19:05:50.933339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.969 [2024-07-14 19:05:50.933368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.969 [2024-07-14 19:05:50.939713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:50.940015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:50.940043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:50.946575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:50.946867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:50.946904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:50.953481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:50.953851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:50.953903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:50.960787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:50.961192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:50.961221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:50.967967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:50.968382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:50.968409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:50.974788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:50.975122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:50.975151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:50.981982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:50.982306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:50.982337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:50.989198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:50.989573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:50.989601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:50.996359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:50.996651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:50.996679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.003714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.004038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.004066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.010760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.011099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.011134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.017552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.017981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.018010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.024453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.024765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.024792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.031413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.031787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.031829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.037734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.038034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.038063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.043507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.043814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.043842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.049409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.049712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.049740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.055135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.055412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.055440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.062221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.062562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.062590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.069069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.069371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.069398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.075701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.076000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.076029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.081939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.082232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.082259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.088457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.088764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.088792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.095501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.095800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.095845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.102801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.103096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.103125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.109815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.110119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.110148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.116806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.117238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.117266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.123722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.124033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.970 [2024-07-14 19:05:51.124061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.970 [2024-07-14 19:05:51.130766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.970 [2024-07-14 19:05:51.131116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.131144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.136916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.137180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.137208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.142410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.142700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.142728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.148888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.149162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.149191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.155850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.156160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.156201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.162547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.162824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.162852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.168460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.168765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.168792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.174135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.174412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.174439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.179635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.179920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.179958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.185433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.185712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.185740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:02.971 [2024-07-14 19:05:51.191443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:02.971 [2024-07-14 19:05:51.191736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.971 [2024-07-14 19:05:51.191764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.233 [2024-07-14 19:05:51.197512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:03.233 [2024-07-14 19:05:51.197804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.233 [2024-07-14 19:05:51.197847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.233 [2024-07-14 19:05:51.203480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:03.233 [2024-07-14 19:05:51.203770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.233 [2024-07-14 19:05:51.203798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:03.233 [2024-07-14 19:05:51.209652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:03.233 [2024-07-14 19:05:51.209958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.233 [2024-07-14 19:05:51.209987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:03.233 [2024-07-14 19:05:51.215634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:03.233 [2024-07-14 19:05:51.215915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.233 [2024-07-14 19:05:51.215944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:03.233 [2024-07-14 19:05:51.221731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:03.233 [2024-07-14 19:05:51.222016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.233 [2024-07-14 19:05:51.222044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:03.233 [2024-07-14 19:05:51.227705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:03.233 [2024-07-14 19:05:51.227994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.233 [2024-07-14 19:05:51.228022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.233 [2024-07-14 19:05:51.233829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.233 [2024-07-14 19:05:51.234131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.234161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.239893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.240157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.240200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.245590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.245903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.245932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.251472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.251763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.251790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.257416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.257687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.257716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.263353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.263631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.263659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.269560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.269842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.269870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.275780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 
00:34:03.234 [2024-07-14 19:05:51.276349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.276376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.282180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.282486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.282514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.288245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.288530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.288558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.294359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.294645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.294672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.300521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.300811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.300839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.306592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.306904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.306932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.312902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.313167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.313195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.319366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.319657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.319701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 
19:05:51.324842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.325132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.325161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.330431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.330711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.330739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.335704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.335992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.336030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.341018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.341283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.341311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.346564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.346887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.346915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.352508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.352821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.352850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.358706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.359009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.359037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.364733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.365030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.365059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.370964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.371229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.371257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.377230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.377506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.377533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.383391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.383683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.383711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.389529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.389834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.389862] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.395573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.395855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.395890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.401366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.401658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.234 [2024-07-14 19:05:51.401701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.234 [2024-07-14 19:05:51.407397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.234 [2024-07-14 19:05:51.407689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.235 [2024-07-14 19:05:51.407717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.235 [2024-07-14 19:05:51.413407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.235 [2024-07-14 19:05:51.413684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:03.235 [2024-07-14 19:05:51.413726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.235 [2024-07-14 19:05:51.419400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.235 [2024-07-14 19:05:51.419679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.235 [2024-07-14 19:05:51.419707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.235 [2024-07-14 19:05:51.425293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.235 [2024-07-14 19:05:51.425586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.235 [2024-07-14 19:05:51.425613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.235 [2024-07-14 19:05:51.431636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.235 [2024-07-14 19:05:51.431919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.235 [2024-07-14 19:05:51.431948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.235 [2024-07-14 19:05:51.437919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.235 [2024-07-14 19:05:51.438212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.235 [2024-07-14 19:05:51.438256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.235 [2024-07-14 19:05:51.443781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.235 [2024-07-14 19:05:51.444082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.235 [2024-07-14 19:05:51.444110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.235 [2024-07-14 19:05:51.449579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.235 [2024-07-14 19:05:51.449869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.235 [2024-07-14 19:05:51.449924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.235 [2024-07-14 19:05:51.455115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.235 [2024-07-14 19:05:51.455415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.235 [2024-07-14 19:05:51.455447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.496 [2024-07-14 19:05:51.460702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.496 [2024-07-14 19:05:51.461005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.461034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.466514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.466806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.466835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.472098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.472396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.472427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.478298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.478602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.478633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.484132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 
00:34:03.497 [2024-07-14 19:05:51.484429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.484462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.490614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.490937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.490966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.497075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.497372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.497403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.503712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.504016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.504045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.510176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.510489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.510521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.516622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.516926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.516976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.523051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.523313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.523347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 19:05:51.528714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:03.497 [2024-07-14 19:05:51.528985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.497 [2024-07-14 19:05:51.529014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.497 [2024-07-14 
19:05:51.534042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90
00:34:03.497 [2024-07-14 19:05:51.534306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.497 [2024-07-14 19:05:51.534334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line record repeats for every subsequent WRITE on qid:1 cid:15 from 19:05:51.540 through 19:05:52.019 (lba values varying: 22336, 25248, 576, 4000, 480, 9280, 12928, ..., 6784): a data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90, the WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling through 0001/0021/0041/0061; the final record in this chunk is truncated mid-entry ...]
00:34:04.019 [2024-07-14 19:05:52.019122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.019 [2024-07-14 19:05:52.025358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.019 [2024-07-14 19:05:52.025651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.019 [2024-07-14 19:05:52.025683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.019 [2024-07-14 19:05:52.031841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.019 [2024-07-14 19:05:52.032141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.019 [2024-07-14 19:05:52.032169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.019 [2024-07-14 19:05:52.038393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.019 [2024-07-14 19:05:52.038686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.019 [2024-07-14 19:05:52.038718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.019 [2024-07-14 19:05:52.044853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.019 [2024-07-14 19:05:52.045148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.019 [2024-07-14 19:05:52.045193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.019 [2024-07-14 19:05:52.051250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.019 [2024-07-14 19:05:52.051543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.019 [2024-07-14 19:05:52.051575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.019 [2024-07-14 19:05:52.057444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.019 [2024-07-14 19:05:52.057736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.057767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.063969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.064259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.064290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.070201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.070494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.070525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.076511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.076800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.076831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.082174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.082480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.082512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.087701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.088016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.088045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.093204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 
00:34:04.020 [2024-07-14 19:05:52.093497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.093528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.099335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.099654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.099685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.107009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.107305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.107337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.114658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.115002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.115030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.122674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.123050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.123094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.130601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.130961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.130989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.138776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.139082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.139111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.146117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.146416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.146448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 
19:05:52.153826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.154224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.154255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.162057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.162357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.162388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.170280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.170670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.170701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.177833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.178267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.178299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.185148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.185457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.185494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.191215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.191507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.191538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.197122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.197425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.197456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.202643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.202953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.202981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.209062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.209354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.209386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.215100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.215391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.215422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.221481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.221840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.221871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.228668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.229024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.229052] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.235714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.236087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.236115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.020 [2024-07-14 19:05:52.242702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.020 [2024-07-14 19:05:52.243098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.020 [2024-07-14 19:05:52.243128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.249817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.250156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.250184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.256800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.257090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.257119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.264040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.264378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.264409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.270970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.271355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.271386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.278469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.278848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.278888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.285750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.286089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.286118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.292832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.293208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.293240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.300320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.300611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.300641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.307322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.307612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.307643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.313518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.313951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.313994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.320629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.321024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.321066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.328045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.328335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.328366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.334129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.279 [2024-07-14 19:05:52.334434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.334465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.340043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 
00:34:04.279 [2024-07-14 19:05:52.340356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.279 [2024-07-14 19:05:52.340387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.279 [2024-07-14 19:05:52.346056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.346349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.346380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.352191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.352505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.352535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.358157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.358520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.358558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.364138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.364452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.364498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.370082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.370373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.370403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.375813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.376099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.376126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.382483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.382764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.382794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 
19:05:52.388024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.388331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.388362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.393558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.393838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.393867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.399089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.399381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.399410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.404635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.404938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.404966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.410264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.410556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.410586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.415733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.416035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.416063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.421354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.421644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.421675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.426889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.427172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.427216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.432331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.432625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.432655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.437875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.438166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.438212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.443336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.443632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.443664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.449120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.449420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.449451] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:04.280 [2024-07-14 19:05:52.455417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f6f80) with pdu=0x2000190fef90 00:34:04.280 [2024-07-14 19:05:52.455711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.280 [2024-07-14 19:05:52.455748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:04.280 00:34:04.280 Latency(us) 00:34:04.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.280 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:04.280 nvme0n1 : 2.00 4962.34 620.29 0.00 0.00 3216.57 2318.03 13107.20 00:34:04.280 =================================================================================================================== 00:34:04.280 Total : 4962.34 620.29 0.00 0.00 3216.57 2318.03 13107.20 00:34:04.280 0 00:34:04.280 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:04.280 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:04.280 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:04.280 | .driver_specific 00:34:04.280 | .nvme_error 00:34:04.280 | .status_code 00:34:04.280 | .command_transient_transport_error' 00:34:04.280 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 320 > 0 )) 00:34:04.538 19:05:52 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3748195 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3748195 ']' 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3748195 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3748195 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3748195' 00:34:04.538 killing process with pid 3748195 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3748195 00:34:04.538 Received shutdown signal, test time was about 2.000000 seconds 00:34:04.538 00:34:04.538 Latency(us) 00:34:04.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.538 =================================================================================================================== 00:34:04.538 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:04.538 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3748195 00:34:04.795 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3746804 00:34:04.795 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3746804 ']' 00:34:04.795 19:05:52 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3746804 00:34:04.795 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:04.795 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:04.795 19:05:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746804 00:34:04.795 19:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:04.795 19:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:04.795 19:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746804' 00:34:04.795 killing process with pid 3746804 00:34:04.795 19:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3746804 00:34:04.795 19:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3746804 00:34:05.052 00:34:05.052 real 0m15.513s 00:34:05.052 user 0m30.465s 00:34:05.052 sys 0m4.301s 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:05.052 ************************************ 00:34:05.052 END TEST nvmf_digest_error 00:34:05.052 ************************************ 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 
00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:05.052 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:05.052 rmmod nvme_tcp 00:34:05.311 rmmod nvme_fabrics 00:34:05.311 rmmod nvme_keyring 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3746804 ']' 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3746804 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3746804 ']' 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3746804 00:34:05.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3746804) - No such process 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3746804 is not found' 00:34:05.311 Process with pid 3746804 is not found 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.311 19:05:53 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:05.311 19:05:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.211 19:05:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:07.211 00:34:07.211 real 0m35.305s 00:34:07.211 user 1m1.951s 00:34:07.211 sys 0m10.010s 00:34:07.211 19:05:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:07.211 19:05:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:07.211 ************************************ 00:34:07.211 END TEST nvmf_digest 00:34:07.211 ************************************ 00:34:07.211 19:05:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:07.211 19:05:55 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:07.211 19:05:55 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:07.211 19:05:55 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:07.211 19:05:55 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:07.211 19:05:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:07.211 19:05:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:07.211 19:05:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.211 ************************************ 00:34:07.211 START TEST nvmf_bdevperf 00:34:07.211 ************************************ 00:34:07.211 19:05:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:07.469 * Looking for test storage... 
00:34:07.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.469 19:05:55 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:07.469 19:05:55 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:07.469 19:05:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:09.372 19:05:57 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.372 
19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:09.372 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:09.372 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:09.372 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:09.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:09.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:09.373 19:05:57 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:09.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:34:09.373 00:34:09.373 --- 10.0.0.2 ping statistics --- 00:34:09.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.373 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:09.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:34:09.373 00:34:09.373 --- 10.0.0.1 ping statistics --- 00:34:09.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.373 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3750630 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3750630 00:34:09.373 19:05:57 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3750630 ']'
00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:09.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:09.373 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:09.373 [2024-07-14 19:05:57.564682] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:09.373 [2024-07-14 19:05:57.564754] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:09.631 EAL: No free 2048 kB hugepages reported on node 1
00:34:09.631 [2024-07-14 19:05:57.629712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:09.631 [2024-07-14 19:05:57.719350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:09.632 [2024-07-14 19:05:57.719400] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:09.632 [2024-07-14 19:05:57.719429] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:09.632 [2024-07-14 19:05:57.719442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:09.632 [2024-07-14 19:05:57.719451] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:09.632 [2024-07-14 19:05:57.719610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:34:09.632 [2024-07-14 19:05:57.719673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:34:09.632 [2024-07-14 19:05:57.719675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:09.632 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:09.632 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:34:09.632 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:09.632 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:09.632 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:09.632 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:09.632 19:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:09.632 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:09.632 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:09.892 [2024-07-14 19:05:57.863045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:09.892 Malloc0
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 --
# [[ 0 == 0 ]]
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:09.892 [2024-07-14 19:05:57.922318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:09.892 {
00:34:09.892 "params": {
00:34:09.892 "name": "Nvme$subsystem",
00:34:09.892 "trtype": "$TEST_TRANSPORT",
00:34:09.892 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:09.892 "adrfam": "ipv4",
00:34:09.892 "trsvcid": "$NVMF_PORT",
00:34:09.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:09.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:09.892 "hdgst": ${hdgst:-false},
00:34:09.892 "ddgst": ${ddgst:-false}
00:34:09.892 },
00:34:09.892 "method": "bdev_nvme_attach_controller"
00:34:09.892 }
00:34:09.892 EOF
00:34:09.892 )")
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:34:09.892 19:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:34:09.892 "params": {
00:34:09.892 "name": "Nvme1",
00:34:09.892 "trtype": "tcp",
00:34:09.892 "traddr": "10.0.0.2",
00:34:09.892 "adrfam": "ipv4",
00:34:09.892 "trsvcid": "4420",
00:34:09.892 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:09.892 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:09.892 "hdgst": false,
00:34:09.892 "ddgst": false
00:34:09.892 },
00:34:09.892 "method": "bdev_nvme_attach_controller"
00:34:09.892 }'
00:34:09.892 [2024-07-14 19:05:57.970385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
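The target provisioning traced in host/bdevperf.sh@17-21 above is a fixed sequence of SPDK RPCs: create the TCP transport, back it with a 64 MiB malloc bdev, expose that bdev through a subsystem, and open a listener. In the suite, rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py; the standalone equivalent might look like the sketch below (the RPC path is an assumption, and run() only echoes so the sketch is safe to execute without a running target):

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence from host/bdevperf.sh@17-21 in the trace above.
# run() echoes instead of executing; RPC is a hypothetical path to rpc.py.
run() { echo "+ $*"; }
RPC="scripts/rpc.py"   # assumed location of SPDK's rpc.py

run "$RPC" nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8192 B in-capsule data
run "$RPC" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Once the listener RPC returns, the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice seen in the trace, and bdevperf can connect.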
00:34:09.892 [2024-07-14 19:05:57.970451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750663 ]
00:34:09.892 EAL: No free 2048 kB hugepages reported on node 1
00:34:09.892 [2024-07-14 19:05:58.031015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:10.151 [2024-07-14 19:05:58.121628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:10.151 Running I/O for 1 seconds...
00:34:11.528
00:34:11.528 Latency(us)
00:34:11.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:11.528 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:11.528 Verification LBA range: start 0x0 length 0x4000
00:34:11.528 Nvme1n1 : 1.01 8479.06 33.12 0.00 0.00 15036.82 3398.16 14951.92
00:34:11.528 ===================================================================================================================
00:34:11.528 Total : 8479.06 33.12 0.00 0.00 15036.82 3398.16 14951.92
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3750826
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:11.528 {
00:34:11.528 "params": {
00:34:11.528 "name": "Nvme$subsystem",
00:34:11.528 "trtype": "$TEST_TRANSPORT",
00:34:11.528 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:11.528 "adrfam": "ipv4",
00:34:11.528 "trsvcid": "$NVMF_PORT",
00:34:11.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:11.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:11.528 "hdgst": ${hdgst:-false},
00:34:11.528 "ddgst": ${ddgst:-false}
00:34:11.528 },
00:34:11.528 "method": "bdev_nvme_attach_controller"
00:34:11.528 }
00:34:11.528 EOF
00:34:11.528 )")
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:34:11.528 19:05:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:34:11.528 "params": {
00:34:11.529 "name": "Nvme1",
00:34:11.529 "trtype": "tcp",
00:34:11.529 "traddr": "10.0.0.2",
00:34:11.529 "adrfam": "ipv4",
00:34:11.529 "trsvcid": "4420",
00:34:11.529 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:11.529 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:11.529 "hdgst": false,
00:34:11.529 "ddgst": false
00:34:11.529 },
00:34:11.529 "method": "bdev_nvme_attach_controller"
00:34:11.529 }'
00:34:11.529 [2024-07-14 19:05:59.618157] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
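gen_nvmf_target_json, traced twice in the run above, emits a bdev_nvme_attach_controller entry on a file descriptor that bdevperf consumes via --json /dev/fd/63. A minimal re-creation of the heredoc pattern visible in the trace is sketched below; note this reproduces only the per-controller entry shown in the printf output, and the real helper may wrap it in a larger subsystems/config envelope:

```shell
#!/usr/bin/env bash
# Minimal re-creation of the per-controller JSON seen in the trace above.
# Values mirror the traced printf block (Nvme1 / 10.0.0.2:4420 / cnode1).
gen_json() {
  local subsystem=1 traddr=10.0.0.2 trsvcid=4420
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_json
```

In the suite the output is passed through jq and joined with IFS=, exactly as the @554-@558 trace lines show; the second bdevperf run adds -t 15 -f so it keeps going for 15 seconds and tolerates the target being killed underneath it.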
00:34:11.529 [2024-07-14 19:05:59.618241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750826 ]
00:34:11.529 EAL: No free 2048 kB hugepages reported on node 1
00:34:11.529 [2024-07-14 19:05:59.683036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:11.787 [2024-07-14 19:05:59.768235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:12.045 Running I/O for 15 seconds...
00:34:14.575 19:06:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3750630
00:34:14.575 19:06:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:34:14.575 [2024-07-14 19:06:02.586336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:14.575 [2024-07-14 19:06:02.586393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:14.575 [2024-07-14 19:06:02.586437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:14.575 [2024-07-14 19:06:02.586453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:14.575 [2024-07-14 19:06:02.586470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:14.575 [2024-07-14 19:06:02.586485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:14.575 [2024-07-14 19:06:02.586502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586720] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.575 [2024-07-14 19:06:02.586971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.586986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.575 [2024-07-14 19:06:02.587310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.575 [2024-07-14 19:06:02.587325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 
[2024-07-14 19:06:02.587689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.576 [2024-07-14 19:06:02.587721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.587753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.587784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.587816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.587853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.587894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.587950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.587978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.587994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.588008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.588023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.588036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.588051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.588065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.576 [2024-07-14 19:06:02.588080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.576 [2024-07-14 19:06:02.588094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same command/completion pair repeats for the remaining queued I/O on qid:1: READ commands lba 38864 through 39456 (len:8 each, SGL TRANSPORT DATA BLOCK) and two WRITE commands at lba 39656 and 39664 (SGL DATA BLOCK OFFSET 0x0 len:0x1000); every completion reports ABORTED - SQ DELETION (00/08) ...] 
00:34:14.578 [2024-07-14 19:06:02.590666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea1050 is same with the state(5) to be set 00:34:14.578 [2024-07-14 19:06:02.590683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:14.578 [2024-07-14 19:06:02.590696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:34:14.578 [2024-07-14 19:06:02.590709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39464 len:8 PRP1 0x0 PRP2 0x0 00:34:14.578 [2024-07-14 19:06:02.590723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.578 [2024-07-14 19:06:02.590788] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xea1050 was disconnected and freed. reset controller. 00:34:14.578 [2024-07-14 19:06:02.590866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.578 [2024-07-14 19:06:02.590902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.578 [2024-07-14 19:06:02.590937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.578 [2024-07-14 19:06:02.590950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.578 [2024-07-14 19:06:02.590963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.578 [2024-07-14 19:06:02.590976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.578 [2024-07-14 19:06:02.590989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.578 [2024-07-14 19:06:02.591001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.578 [2024-07-14 19:06:02.591013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.578 [2024-07-14 19:06:02.594629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.578 [2024-07-14 19:06:02.594668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.578 [2024-07-14 19:06:02.595602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.578 [2024-07-14 19:06:02.595635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.578 [2024-07-14 19:06:02.595652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.578 [2024-07-14 19:06:02.595900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.578 [2024-07-14 19:06:02.596136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.578 [2024-07-14 19:06:02.596183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.578 [2024-07-14 19:06:02.596201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.578 [2024-07-14 19:06:02.599787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.578 [2024-07-14 19:06:02.609124 through 19:06:02.683274] [... the same reset cycle repeats six more times: nvme_ctrlr_disconnect: resetting controller → posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 → nvme_tcp_qpair_process_completions: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor → nvme_ctrlr_process_init: Ctrlr is in error state → controller reinitialization failed → in failed state → _bdev_nvme_reset_ctrlr_complete: Resetting controller failed. ...] 
00:34:14.579 [2024-07-14 19:06:02.692771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.579 [2024-07-14 19:06:02.693163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.579 [2024-07-14 19:06:02.693195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.579 [2024-07-14 19:06:02.693212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.579 [2024-07-14 19:06:02.693449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.579 [2024-07-14 19:06:02.693690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.579 [2024-07-14 19:06:02.693713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.579 [2024-07-14 19:06:02.693728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.579 [2024-07-14 19:06:02.697312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.579 [2024-07-14 19:06:02.706865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.579 [2024-07-14 19:06:02.707281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.579 [2024-07-14 19:06:02.707311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.579 [2024-07-14 19:06:02.707328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.579 [2024-07-14 19:06:02.707564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.579 [2024-07-14 19:06:02.707806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.579 [2024-07-14 19:06:02.707829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.579 [2024-07-14 19:06:02.707843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.579 [2024-07-14 19:06:02.711430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.579 [2024-07-14 19:06:02.720723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.579 [2024-07-14 19:06:02.721148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.579 [2024-07-14 19:06:02.721191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.579 [2024-07-14 19:06:02.721206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.579 [2024-07-14 19:06:02.721461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.579 [2024-07-14 19:06:02.721664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.579 [2024-07-14 19:06:02.721684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.579 [2024-07-14 19:06:02.721696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.579 [2024-07-14 19:06:02.725244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.579 [2024-07-14 19:06:02.734751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.579 [2024-07-14 19:06:02.735183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.579 [2024-07-14 19:06:02.735227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.579 [2024-07-14 19:06:02.735243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.579 [2024-07-14 19:06:02.735510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.579 [2024-07-14 19:06:02.735751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.579 [2024-07-14 19:06:02.735774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.579 [2024-07-14 19:06:02.735788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.579 [2024-07-14 19:06:02.739373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.579 [2024-07-14 19:06:02.748665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.579 [2024-07-14 19:06:02.749073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.579 [2024-07-14 19:06:02.749104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.579 [2024-07-14 19:06:02.749121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.579 [2024-07-14 19:06:02.749358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.579 [2024-07-14 19:06:02.749600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.579 [2024-07-14 19:06:02.749622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.579 [2024-07-14 19:06:02.749637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.579 [2024-07-14 19:06:02.753224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.579 [2024-07-14 19:06:02.762514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.579 [2024-07-14 19:06:02.762865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.579 [2024-07-14 19:06:02.762903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.579 [2024-07-14 19:06:02.762927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.579 [2024-07-14 19:06:02.763165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.579 [2024-07-14 19:06:02.763407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.579 [2024-07-14 19:06:02.763430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.579 [2024-07-14 19:06:02.763445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.579 [2024-07-14 19:06:02.767033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.579 [2024-07-14 19:06:02.776374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.579 [2024-07-14 19:06:02.776756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.579 [2024-07-14 19:06:02.776787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.579 [2024-07-14 19:06:02.776805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.580 [2024-07-14 19:06:02.777063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.580 [2024-07-14 19:06:02.777305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.580 [2024-07-14 19:06:02.777328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.580 [2024-07-14 19:06:02.777343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.580 [2024-07-14 19:06:02.780928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.580 [2024-07-14 19:06:02.790220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.580 [2024-07-14 19:06:02.790709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.580 [2024-07-14 19:06:02.790760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.580 [2024-07-14 19:06:02.790777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.580 [2024-07-14 19:06:02.791026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.580 [2024-07-14 19:06:02.791268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.580 [2024-07-14 19:06:02.791291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.580 [2024-07-14 19:06:02.791305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.580 [2024-07-14 19:06:02.794886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.839 [2024-07-14 19:06:02.804184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.839 [2024-07-14 19:06:02.804578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-14 19:06:02.804609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.839 [2024-07-14 19:06:02.804626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.839 [2024-07-14 19:06:02.804863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.839 [2024-07-14 19:06:02.805115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.839 [2024-07-14 19:06:02.805144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.839 [2024-07-14 19:06:02.805159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.839 [2024-07-14 19:06:02.808741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.839 [2024-07-14 19:06:02.818043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.839 [2024-07-14 19:06:02.818499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-14 19:06:02.818526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.839 [2024-07-14 19:06:02.818541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.839 [2024-07-14 19:06:02.818759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.839 [2024-07-14 19:06:02.818973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.839 [2024-07-14 19:06:02.818993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.839 [2024-07-14 19:06:02.819005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.839 [2024-07-14 19:06:02.822535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.839 [2024-07-14 19:06:02.832061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.839 [2024-07-14 19:06:02.832447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-14 19:06:02.832477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.839 [2024-07-14 19:06:02.832495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.839 [2024-07-14 19:06:02.832731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.839 [2024-07-14 19:06:02.832986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.839 [2024-07-14 19:06:02.833010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.839 [2024-07-14 19:06:02.833025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.839 [2024-07-14 19:06:02.836597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.839 [2024-07-14 19:06:02.846105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.839 [2024-07-14 19:06:02.846513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-14 19:06:02.846544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.839 [2024-07-14 19:06:02.846561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.839 [2024-07-14 19:06:02.846798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.839 [2024-07-14 19:06:02.847049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.839 [2024-07-14 19:06:02.847073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.839 [2024-07-14 19:06:02.847088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.839 [2024-07-14 19:06:02.850663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.839 [2024-07-14 19:06:02.859968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.839 [2024-07-14 19:06:02.860384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.839 [2024-07-14 19:06:02.860411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.839 [2024-07-14 19:06:02.860426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.860647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.860850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.860869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.860890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.864416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.873948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.874346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.874377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.874395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.874632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.874873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.874906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.874929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.878544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.887836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.888231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.888261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.888279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.888516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.888757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.888780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.888794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.892376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.901862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.902258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.902284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.902298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.902504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.902707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.902727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.902739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.906302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.915795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.916221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.916249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.916264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.916503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.916758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.916781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.916796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.920374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.929678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.930094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.930123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.930139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.930397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.930640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.930664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.930678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.934261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.943543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.943946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.943979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.943994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.944240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.944444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.944464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.944481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.948018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.957514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.957911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.957942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.957959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.958196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.958438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.958461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.958477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.962060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.971565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.971977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.972007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.972023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.972261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.972502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.972525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.972540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.976132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.985415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.985788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.985819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.985836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:02.986081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:02.986324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:02.986347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:02.986361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:02.989940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:02.999431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:02.999828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.840 [2024-07-14 19:06:02.999864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.840 [2024-07-14 19:06:02.999891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.840 [2024-07-14 19:06:03.000130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.840 [2024-07-14 19:06:03.000372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.840 [2024-07-14 19:06:03.000395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.840 [2024-07-14 19:06:03.000410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.840 [2024-07-14 19:06:03.003987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.840 [2024-07-14 19:06:03.013270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.840 [2024-07-14 19:06:03.013654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.841 [2024-07-14 19:06:03.013685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.841 [2024-07-14 19:06:03.013702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.841 [2024-07-14 19:06:03.013949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.841 [2024-07-14 19:06:03.014191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.841 [2024-07-14 19:06:03.014214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.841 [2024-07-14 19:06:03.014229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.841 [2024-07-14 19:06:03.017801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.841 [2024-07-14 19:06:03.027326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.841 [2024-07-14 19:06:03.027712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.841 [2024-07-14 19:06:03.027739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.841 [2024-07-14 19:06:03.027754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.841 [2024-07-14 19:06:03.028006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.841 [2024-07-14 19:06:03.028227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.841 [2024-07-14 19:06:03.028247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.841 [2024-07-14 19:06:03.028259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.841 [2024-07-14 19:06:03.031789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.841 [2024-07-14 19:06:03.041283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.841 [2024-07-14 19:06:03.041780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.841 [2024-07-14 19:06:03.041831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.841 [2024-07-14 19:06:03.041848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.841 [2024-07-14 19:06:03.042092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.841 [2024-07-14 19:06:03.042340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.841 [2024-07-14 19:06:03.042363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.841 [2024-07-14 19:06:03.042378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.841 [2024-07-14 19:06:03.045959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.841 [2024-07-14 19:06:03.055235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.841 [2024-07-14 19:06:03.055621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.841 [2024-07-14 19:06:03.055648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:14.841 [2024-07-14 19:06:03.055663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:14.841 [2024-07-14 19:06:03.055890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:14.841 [2024-07-14 19:06:03.056094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.841 [2024-07-14 19:06:03.056114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.841 [2024-07-14 19:06:03.056126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.841 [2024-07-14 19:06:03.059649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.101 [2024-07-14 19:06:03.069158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.101 [2024-07-14 19:06:03.069580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.101 [2024-07-14 19:06:03.069607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.101 [2024-07-14 19:06:03.069622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.101 [2024-07-14 19:06:03.069874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.101 [2024-07-14 19:06:03.070129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.101 [2024-07-14 19:06:03.070149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.101 [2024-07-14 19:06:03.070177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.101 [2024-07-14 19:06:03.073700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.101 [2024-07-14 19:06:03.082997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.101 [2024-07-14 19:06:03.083405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.101 [2024-07-14 19:06:03.083435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.101 [2024-07-14 19:06:03.083452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.101 [2024-07-14 19:06:03.083689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.101 [2024-07-14 19:06:03.083941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.101 [2024-07-14 19:06:03.083965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.101 [2024-07-14 19:06:03.083980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.101 [2024-07-14 19:06:03.087554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.101 [2024-07-14 19:06:03.096937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.101 [2024-07-14 19:06:03.097311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.101 [2024-07-14 19:06:03.097342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.101 [2024-07-14 19:06:03.097360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.101 [2024-07-14 19:06:03.097597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.101 [2024-07-14 19:06:03.097838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.101 [2024-07-14 19:06:03.097861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.101 [2024-07-14 19:06:03.097886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.101 [2024-07-14 19:06:03.101459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.101 [2024-07-14 19:06:03.110949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.101 [2024-07-14 19:06:03.111332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.101 [2024-07-14 19:06:03.111359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.101 [2024-07-14 19:06:03.111374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.101 [2024-07-14 19:06:03.111596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.101 [2024-07-14 19:06:03.111801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.101 [2024-07-14 19:06:03.111820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.101 [2024-07-14 19:06:03.111832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.101 [2024-07-14 19:06:03.115362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.101 [2024-07-14 19:06:03.124852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.101 [2024-07-14 19:06:03.125279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.101 [2024-07-14 19:06:03.125312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.101 [2024-07-14 19:06:03.125344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.101 [2024-07-14 19:06:03.125596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.101 [2024-07-14 19:06:03.125837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.101 [2024-07-14 19:06:03.125860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.101 [2024-07-14 19:06:03.125874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.101 [2024-07-14 19:06:03.129461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.101 [2024-07-14 19:06:03.138728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.101 [2024-07-14 19:06:03.139227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.101 [2024-07-14 19:06:03.139278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.101 [2024-07-14 19:06:03.139300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.101 [2024-07-14 19:06:03.139538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.101 [2024-07-14 19:06:03.139779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.101 [2024-07-14 19:06:03.139802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.101 [2024-07-14 19:06:03.139817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.101 [2024-07-14 19:06:03.143396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.101 [2024-07-14 19:06:03.152666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.101 [2024-07-14 19:06:03.153045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.101 [2024-07-14 19:06:03.153076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.101 [2024-07-14 19:06:03.153093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.101 [2024-07-14 19:06:03.153329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.101 [2024-07-14 19:06:03.153571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.101 [2024-07-14 19:06:03.153594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.101 [2024-07-14 19:06:03.153608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.101 [2024-07-14 19:06:03.157184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.101 [2024-07-14 19:06:03.166662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.101 [2024-07-14 19:06:03.167154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.101 [2024-07-14 19:06:03.167206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.101 [2024-07-14 19:06:03.167223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.101 [2024-07-14 19:06:03.167460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.101 [2024-07-14 19:06:03.167701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.101 [2024-07-14 19:06:03.167724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.101 [2024-07-14 19:06:03.167738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.101 [2024-07-14 19:06:03.171329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.101 [2024-07-14 19:06:03.180611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.101 [2024-07-14 19:06:03.180976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.181003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.181017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.181217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.181421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.181447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.181460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.184990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.194475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.194911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.194938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.194953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.195210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.195452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.195475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.195490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.199069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.208336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.208733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.208763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.208780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.209027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.209269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.209292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.209306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.212881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.222374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.222753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.222784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.222801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.223049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.223291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.223314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.223328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.226919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.236211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.236638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.236665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.236681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.236945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.237188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.237211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.237226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.240793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.250070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.250472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.250499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.250514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.250733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.250947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.250967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.250980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.254502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.263988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.264368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.264398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.264414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.264652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.264906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.264930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.264945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.268527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.278028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.278413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.278440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.278455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.278682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.278896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.278917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.278929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.282451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.291934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.292319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.292350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.292367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.292604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.292845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.292868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.292894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.296466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.305949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.306351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.306381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.306397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.306633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.306885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.306908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.306923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.310492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.102 [2024-07-14 19:06:03.319980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.102 [2024-07-14 19:06:03.320362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.102 [2024-07-14 19:06:03.320393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.102 [2024-07-14 19:06:03.320410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.102 [2024-07-14 19:06:03.320648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.102 [2024-07-14 19:06:03.320900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.102 [2024-07-14 19:06:03.320924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.102 [2024-07-14 19:06:03.320944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.102 [2024-07-14 19:06:03.324516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.363 [2024-07-14 19:06:03.333808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.334223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.334254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.334271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.334508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.334749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.334772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.363 [2024-07-14 19:06:03.334787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.363 [2024-07-14 19:06:03.338365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.363 [2024-07-14 19:06:03.347846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.348226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.348257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.348274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.348511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.348754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.348777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.363 [2024-07-14 19:06:03.348791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.363 [2024-07-14 19:06:03.352371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.363 [2024-07-14 19:06:03.361849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.362259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.362290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.362307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.362543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.362785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.362807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.363 [2024-07-14 19:06:03.362821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.363 [2024-07-14 19:06:03.366402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.363 [2024-07-14 19:06:03.375710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.376073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.376109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.376127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.376364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.376606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.376628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.363 [2024-07-14 19:06:03.376643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.363 [2024-07-14 19:06:03.380220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.363 [2024-07-14 19:06:03.389722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.390102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.390139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.390155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.390392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.390633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.390656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.363 [2024-07-14 19:06:03.390671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.363 [2024-07-14 19:06:03.394261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.363 [2024-07-14 19:06:03.403743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.404110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.404140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.404157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.404394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.404635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.404658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.363 [2024-07-14 19:06:03.404673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.363 [2024-07-14 19:06:03.408253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.363 [2024-07-14 19:06:03.417732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.418135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.418166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.418183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.418420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.418667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.418690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.363 [2024-07-14 19:06:03.418705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.363 [2024-07-14 19:06:03.422283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.363 [2024-07-14 19:06:03.431575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.431975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.432006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.432023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.432260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.432502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.432524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.363 [2024-07-14 19:06:03.432539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.363 [2024-07-14 19:06:03.436120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.363 [2024-07-14 19:06:03.445608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.446013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.446044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.446061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.446298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.446539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.446562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.363 [2024-07-14 19:06:03.446576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.363 [2024-07-14 19:06:03.450157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.363 [2024-07-14 19:06:03.459640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.363 [2024-07-14 19:06:03.460044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.363 [2024-07-14 19:06:03.460075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.363 [2024-07-14 19:06:03.460092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.363 [2024-07-14 19:06:03.460330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.363 [2024-07-14 19:06:03.460571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.363 [2024-07-14 19:06:03.460594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.460609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.364 [2024-07-14 19:06:03.464193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.364 [2024-07-14 19:06:03.473481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.364 [2024-07-14 19:06:03.473855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.364 [2024-07-14 19:06:03.473893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.364 [2024-07-14 19:06:03.473913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.364 [2024-07-14 19:06:03.474151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.364 [2024-07-14 19:06:03.474392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.364 [2024-07-14 19:06:03.474415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.474430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.364 [2024-07-14 19:06:03.478020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.364 [2024-07-14 19:06:03.487512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.364 [2024-07-14 19:06:03.487890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.364 [2024-07-14 19:06:03.487921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.364 [2024-07-14 19:06:03.487938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.364 [2024-07-14 19:06:03.488176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.364 [2024-07-14 19:06:03.488418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.364 [2024-07-14 19:06:03.488441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.488455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.364 [2024-07-14 19:06:03.492035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.364 [2024-07-14 19:06:03.501515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.364 [2024-07-14 19:06:03.501870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.364 [2024-07-14 19:06:03.501910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.364 [2024-07-14 19:06:03.501928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.364 [2024-07-14 19:06:03.502164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.364 [2024-07-14 19:06:03.502406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.364 [2024-07-14 19:06:03.502429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.502443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.364 [2024-07-14 19:06:03.506020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.364 [2024-07-14 19:06:03.515515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.364 [2024-07-14 19:06:03.515890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.364 [2024-07-14 19:06:03.515920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.364 [2024-07-14 19:06:03.515943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.364 [2024-07-14 19:06:03.516181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.364 [2024-07-14 19:06:03.516422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.364 [2024-07-14 19:06:03.516445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.516460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.364 [2024-07-14 19:06:03.520040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.364 [2024-07-14 19:06:03.529541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.364 [2024-07-14 19:06:03.529927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.364 [2024-07-14 19:06:03.529959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.364 [2024-07-14 19:06:03.529977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.364 [2024-07-14 19:06:03.530214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.364 [2024-07-14 19:06:03.530468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.364 [2024-07-14 19:06:03.530494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.530508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.364 [2024-07-14 19:06:03.534091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.364 [2024-07-14 19:06:03.543578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.364 [2024-07-14 19:06:03.543943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.364 [2024-07-14 19:06:03.543974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.364 [2024-07-14 19:06:03.543991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.364 [2024-07-14 19:06:03.544230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.364 [2024-07-14 19:06:03.544472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.364 [2024-07-14 19:06:03.544495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.544509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.364 [2024-07-14 19:06:03.548084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.364 [2024-07-14 19:06:03.557571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.364 [2024-07-14 19:06:03.557970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.364 [2024-07-14 19:06:03.558002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.364 [2024-07-14 19:06:03.558020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.364 [2024-07-14 19:06:03.558258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.364 [2024-07-14 19:06:03.558499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.364 [2024-07-14 19:06:03.558528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.558544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.364 [2024-07-14 19:06:03.562121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.364 [2024-07-14 19:06:03.571623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.364 [2024-07-14 19:06:03.572049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.364 [2024-07-14 19:06:03.572080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.364 [2024-07-14 19:06:03.572097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.364 [2024-07-14 19:06:03.572334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.364 [2024-07-14 19:06:03.572576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.364 [2024-07-14 19:06:03.572598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.572613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.364 [2024-07-14 19:06:03.576198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.364 [2024-07-14 19:06:03.585480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.364 [2024-07-14 19:06:03.585890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.364 [2024-07-14 19:06:03.585928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.364 [2024-07-14 19:06:03.585944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.364 [2024-07-14 19:06:03.586181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.364 [2024-07-14 19:06:03.586423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.364 [2024-07-14 19:06:03.586446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.364 [2024-07-14 19:06:03.586460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.625 [2024-07-14 19:06:03.590039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.625 [2024-07-14 19:06:03.599330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.625 [2024-07-14 19:06:03.599724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.625 [2024-07-14 19:06:03.599754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.625 [2024-07-14 19:06:03.599771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.625 [2024-07-14 19:06:03.600022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.625 [2024-07-14 19:06:03.600264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.625 [2024-07-14 19:06:03.600287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.625 [2024-07-14 19:06:03.600302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.625 [2024-07-14 19:06:03.603875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.625 [2024-07-14 19:06:03.613427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.625 [2024-07-14 19:06:03.613812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.625 [2024-07-14 19:06:03.613844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.625 [2024-07-14 19:06:03.613862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.626 [2024-07-14 19:06:03.614110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.626 [2024-07-14 19:06:03.614354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.626 [2024-07-14 19:06:03.614377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.626 [2024-07-14 19:06:03.614392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.626 [2024-07-14 19:06:03.617974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.626 [2024-07-14 19:06:03.627490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.626 [2024-07-14 19:06:03.627863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.626 [2024-07-14 19:06:03.627905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.626 [2024-07-14 19:06:03.627924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.626 [2024-07-14 19:06:03.628162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.626 [2024-07-14 19:06:03.628403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.626 [2024-07-14 19:06:03.628426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.626 [2024-07-14 19:06:03.628441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.626 [2024-07-14 19:06:03.632028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.626 [2024-07-14 19:06:03.641528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.626 [2024-07-14 19:06:03.641921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.626 [2024-07-14 19:06:03.641951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.626 [2024-07-14 19:06:03.641969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.626 [2024-07-14 19:06:03.642206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.626 [2024-07-14 19:06:03.642448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.626 [2024-07-14 19:06:03.642471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.626 [2024-07-14 19:06:03.642486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.626 [2024-07-14 19:06:03.646064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.626 [2024-07-14 19:06:03.655555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.626 [2024-07-14 19:06:03.655957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.626 [2024-07-14 19:06:03.655988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.626 [2024-07-14 19:06:03.656006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.626 [2024-07-14 19:06:03.656250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.626 [2024-07-14 19:06:03.656492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.626 [2024-07-14 19:06:03.656515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.626 [2024-07-14 19:06:03.656530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.626 [2024-07-14 19:06:03.660112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.626 [2024-07-14 19:06:03.669414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.626 [2024-07-14 19:06:03.669821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.626 [2024-07-14 19:06:03.669852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.626 [2024-07-14 19:06:03.669869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.626 [2024-07-14 19:06:03.670116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.626 [2024-07-14 19:06:03.670358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.626 [2024-07-14 19:06:03.670380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.626 [2024-07-14 19:06:03.670395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.626 [2024-07-14 19:06:03.673975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.626 [2024-07-14 19:06:03.683266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.626 [2024-07-14 19:06:03.683680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.626 [2024-07-14 19:06:03.683711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.626 [2024-07-14 19:06:03.683728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.626 [2024-07-14 19:06:03.683980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.626 [2024-07-14 19:06:03.684223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.626 [2024-07-14 19:06:03.684246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.626 [2024-07-14 19:06:03.684260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.626 [2024-07-14 19:06:03.687828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.626 [2024-07-14 19:06:03.697119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.626 [2024-07-14 19:06:03.697493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.626 [2024-07-14 19:06:03.697523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.626 [2024-07-14 19:06:03.697540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.626 [2024-07-14 19:06:03.697777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.626 [2024-07-14 19:06:03.698031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.626 [2024-07-14 19:06:03.698055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.626 [2024-07-14 19:06:03.698076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.626 [2024-07-14 19:06:03.701644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.626 [2024-07-14 19:06:03.711134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.626 [2024-07-14 19:06:03.711503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.626 [2024-07-14 19:06:03.711534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.626 [2024-07-14 19:06:03.711551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.626 [2024-07-14 19:06:03.711788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.626 [2024-07-14 19:06:03.712041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.626 [2024-07-14 19:06:03.712065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.626 [2024-07-14 19:06:03.712080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.626 [2024-07-14 19:06:03.715648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.626 [2024-07-14 19:06:03.725139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.626 [2024-07-14 19:06:03.725485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.626 [2024-07-14 19:06:03.725515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.626 [2024-07-14 19:06:03.725532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.626 [2024-07-14 19:06:03.725769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.626 [2024-07-14 19:06:03.726024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.626 [2024-07-14 19:06:03.726047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.626 [2024-07-14 19:06:03.726062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.626 [2024-07-14 19:06:03.729647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.626 [2024-07-14 19:06:03.739136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.626 [2024-07-14 19:06:03.739513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.626 [2024-07-14 19:06:03.739544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.626 [2024-07-14 19:06:03.739561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.626 [2024-07-14 19:06:03.739798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.626 [2024-07-14 19:06:03.740053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.626 [2024-07-14 19:06:03.740077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.626 [2024-07-14 19:06:03.740092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.626 [2024-07-14 19:06:03.743658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.626 [2024-07-14 19:06:03.753170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.626 [2024-07-14 19:06:03.753570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.626 [2024-07-14 19:06:03.753605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.626 [2024-07-14 19:06:03.753624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.626 [2024-07-14 19:06:03.753861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.626 [2024-07-14 19:06:03.754114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.626 [2024-07-14 19:06:03.754137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.626 [2024-07-14 19:06:03.754152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.626 [2024-07-14 19:06:03.757727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.626 [2024-07-14 19:06:03.767004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.626 [2024-07-14 19:06:03.767407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.627 [2024-07-14 19:06:03.767437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.627 [2024-07-14 19:06:03.767454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.627 [2024-07-14 19:06:03.767690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.627 [2024-07-14 19:06:03.767944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.627 [2024-07-14 19:06:03.767968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.627 [2024-07-14 19:06:03.767983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.627 [2024-07-14 19:06:03.771569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.627 [2024-07-14 19:06:03.780870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.627 [2024-07-14 19:06:03.781276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.627 [2024-07-14 19:06:03.781307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.627 [2024-07-14 19:06:03.781324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.627 [2024-07-14 19:06:03.781561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.627 [2024-07-14 19:06:03.781803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.627 [2024-07-14 19:06:03.781826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.627 [2024-07-14 19:06:03.781840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.627 [2024-07-14 19:06:03.785427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.627 [2024-07-14 19:06:03.794930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.627 [2024-07-14 19:06:03.795349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.627 [2024-07-14 19:06:03.795391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.627 [2024-07-14 19:06:03.795407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.627 [2024-07-14 19:06:03.795645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.627 [2024-07-14 19:06:03.795902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.627 [2024-07-14 19:06:03.795930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.627 [2024-07-14 19:06:03.795945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.627 [2024-07-14 19:06:03.799512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.627 [2024-07-14 19:06:03.808789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.627 [2024-07-14 19:06:03.809178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.627 [2024-07-14 19:06:03.809209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.627 [2024-07-14 19:06:03.809226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.627 [2024-07-14 19:06:03.809463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.627 [2024-07-14 19:06:03.809704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.627 [2024-07-14 19:06:03.809727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.627 [2024-07-14 19:06:03.809741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.627 [2024-07-14 19:06:03.813323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.627 [2024-07-14 19:06:03.822810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.627 [2024-07-14 19:06:03.823194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.627 [2024-07-14 19:06:03.823225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.627 [2024-07-14 19:06:03.823242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.627 [2024-07-14 19:06:03.823480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.627 [2024-07-14 19:06:03.823722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.627 [2024-07-14 19:06:03.823745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.627 [2024-07-14 19:06:03.823760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.627 [2024-07-14 19:06:03.827357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.627 [2024-07-14 19:06:03.836663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.627 [2024-07-14 19:06:03.837053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.627 [2024-07-14 19:06:03.837085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.627 [2024-07-14 19:06:03.837103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.627 [2024-07-14 19:06:03.837340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.627 [2024-07-14 19:06:03.837581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.627 [2024-07-14 19:06:03.837604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.627 [2024-07-14 19:06:03.837618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.627 [2024-07-14 19:06:03.841206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.886 [2024-07-14 19:06:03.850695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.886 [2024-07-14 19:06:03.851095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.886 [2024-07-14 19:06:03.851126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.886 [2024-07-14 19:06:03.851143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.886 [2024-07-14 19:06:03.851381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.886 [2024-07-14 19:06:03.851623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.886 [2024-07-14 19:06:03.851646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.886 [2024-07-14 19:06:03.851660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.886 [2024-07-14 19:06:03.855240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.886 [2024-07-14 19:06:03.864724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.886 [2024-07-14 19:06:03.865140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.886 [2024-07-14 19:06:03.865171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.886 [2024-07-14 19:06:03.865188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.886 [2024-07-14 19:06:03.865425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.886 [2024-07-14 19:06:03.865667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.886 [2024-07-14 19:06:03.865690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.886 [2024-07-14 19:06:03.865704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.886 [2024-07-14 19:06:03.869297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.886 [2024-07-14 19:06:03.878586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.886 [2024-07-14 19:06:03.878973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.886 [2024-07-14 19:06:03.879004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.886 [2024-07-14 19:06:03.879021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.886 [2024-07-14 19:06:03.879258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.886 [2024-07-14 19:06:03.879500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.886 [2024-07-14 19:06:03.879523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.886 [2024-07-14 19:06:03.879538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.886 [2024-07-14 19:06:03.883117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.886 [2024-07-14 19:06:03.892600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.886 [2024-07-14 19:06:03.893006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.886 [2024-07-14 19:06:03.893037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.886 [2024-07-14 19:06:03.893067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.886 [2024-07-14 19:06:03.893305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.886 [2024-07-14 19:06:03.893547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.886 [2024-07-14 19:06:03.893570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.886 [2024-07-14 19:06:03.893585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.886 [2024-07-14 19:06:03.897167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.886 [2024-07-14 19:06:03.906437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.886 [2024-07-14 19:06:03.906836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.886 [2024-07-14 19:06:03.906866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.886 [2024-07-14 19:06:03.906894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.886 [2024-07-14 19:06:03.907133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.886 [2024-07-14 19:06:03.907375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.886 [2024-07-14 19:06:03.907398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.886 [2024-07-14 19:06:03.907412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.886 [2024-07-14 19:06:03.910990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.886 [2024-07-14 19:06:03.920473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.886 [2024-07-14 19:06:03.920871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.886 [2024-07-14 19:06:03.920930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.886 [2024-07-14 19:06:03.920950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.886 [2024-07-14 19:06:03.921189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.886 [2024-07-14 19:06:03.921431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.886 [2024-07-14 19:06:03.921454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.886 [2024-07-14 19:06:03.921468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.886 [2024-07-14 19:06:03.925050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.886 [2024-07-14 19:06:03.934348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.887 [2024-07-14 19:06:03.934754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.887 [2024-07-14 19:06:03.934785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.887 [2024-07-14 19:06:03.934802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.887 [2024-07-14 19:06:03.935050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.887 [2024-07-14 19:06:03.935292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.887 [2024-07-14 19:06:03.935321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.887 [2024-07-14 19:06:03.935336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.887 [2024-07-14 19:06:03.938915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.887 [2024-07-14 19:06:03.948185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.887 [2024-07-14 19:06:03.948586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.887 [2024-07-14 19:06:03.948616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.887 [2024-07-14 19:06:03.948633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.887 [2024-07-14 19:06:03.948871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.887 [2024-07-14 19:06:03.949124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.887 [2024-07-14 19:06:03.949147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.887 [2024-07-14 19:06:03.949161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.887 [2024-07-14 19:06:03.952730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.887 [2024-07-14 19:06:03.962223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.887 [2024-07-14 19:06:03.962608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.887 [2024-07-14 19:06:03.962638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.887 [2024-07-14 19:06:03.962655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.887 [2024-07-14 19:06:03.962903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.887 [2024-07-14 19:06:03.963145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.887 [2024-07-14 19:06:03.963168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.887 [2024-07-14 19:06:03.963182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.887 [2024-07-14 19:06:03.966748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.887 [2024-07-14 19:06:03.976251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.887 [2024-07-14 19:06:03.976638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.887 [2024-07-14 19:06:03.976668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.887 [2024-07-14 19:06:03.976685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.887 [2024-07-14 19:06:03.976937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.887 [2024-07-14 19:06:03.977179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.887 [2024-07-14 19:06:03.977201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.887 [2024-07-14 19:06:03.977216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.887 [2024-07-14 19:06:03.980787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.887 [2024-07-14 19:06:03.990278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.887 [2024-07-14 19:06:03.990695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.887 [2024-07-14 19:06:03.990725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.887 [2024-07-14 19:06:03.990742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.887 [2024-07-14 19:06:03.990992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.887 [2024-07-14 19:06:03.991235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.887 [2024-07-14 19:06:03.991257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.887 [2024-07-14 19:06:03.991272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.887 [2024-07-14 19:06:03.994843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.887 [2024-07-14 19:06:04.004127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.887 [2024-07-14 19:06:04.004521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.887 [2024-07-14 19:06:04.004551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.887 [2024-07-14 19:06:04.004567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.887 [2024-07-14 19:06:04.004804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.887 [2024-07-14 19:06:04.005058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.887 [2024-07-14 19:06:04.005082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.887 [2024-07-14 19:06:04.005097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.887 [2024-07-14 19:06:04.008668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.887 [2024-07-14 19:06:04.018163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.887 [2024-07-14 19:06:04.018531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.887 [2024-07-14 19:06:04.018561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:15.887 [2024-07-14 19:06:04.018578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:15.887 [2024-07-14 19:06:04.018815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:15.887 [2024-07-14 19:06:04.019067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.887 [2024-07-14 19:06:04.019091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.887 [2024-07-14 19:06:04.019106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.887 [2024-07-14 19:06:04.022675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.887 [2024-07-14 19:06:04.032184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.887 [2024-07-14 19:06:04.032581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.887 [2024-07-14 19:06:04.032611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.887 [2024-07-14 19:06:04.032628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.887 [2024-07-14 19:06:04.032872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.887 [2024-07-14 19:06:04.033126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.887 [2024-07-14 19:06:04.033149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.887 [2024-07-14 19:06:04.033164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.887 [2024-07-14 19:06:04.036731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.887 [2024-07-14 19:06:04.046223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.887 [2024-07-14 19:06:04.046599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.887 [2024-07-14 19:06:04.046629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.887 [2024-07-14 19:06:04.046646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.887 [2024-07-14 19:06:04.046893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.887 [2024-07-14 19:06:04.047135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.887 [2024-07-14 19:06:04.047158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.887 [2024-07-14 19:06:04.047172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.887 [2024-07-14 19:06:04.050840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.887 [2024-07-14 19:06:04.060137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.887 [2024-07-14 19:06:04.060550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.887 [2024-07-14 19:06:04.060581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.887 [2024-07-14 19:06:04.060598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.887 [2024-07-14 19:06:04.060835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.887 [2024-07-14 19:06:04.061086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.887 [2024-07-14 19:06:04.061110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.887 [2024-07-14 19:06:04.061125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.887 [2024-07-14 19:06:04.064699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.887 [2024-07-14 19:06:04.074018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.887 [2024-07-14 19:06:04.074417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.887 [2024-07-14 19:06:04.074448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.887 [2024-07-14 19:06:04.074465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.887 [2024-07-14 19:06:04.074702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.887 [2024-07-14 19:06:04.074954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.887 [2024-07-14 19:06:04.074978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.887 [2024-07-14 19:06:04.074999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.887 [2024-07-14 19:06:04.078597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.888 [2024-07-14 19:06:04.087894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.888 [2024-07-14 19:06:04.088265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.888 [2024-07-14 19:06:04.088297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.888 [2024-07-14 19:06:04.088315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.888 [2024-07-14 19:06:04.088552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.888 [2024-07-14 19:06:04.088794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.888 [2024-07-14 19:06:04.088818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.888 [2024-07-14 19:06:04.088833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.888 [2024-07-14 19:06:04.092413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:15.888 [2024-07-14 19:06:04.101945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:15.888 [2024-07-14 19:06:04.102306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.888 [2024-07-14 19:06:04.102338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:15.888 [2024-07-14 19:06:04.102355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:15.888 [2024-07-14 19:06:04.102594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:15.888 [2024-07-14 19:06:04.102836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:15.888 [2024-07-14 19:06:04.102859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:15.888 [2024-07-14 19:06:04.102873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:15.888 [2024-07-14 19:06:04.106465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.148 [2024-07-14 19:06:04.115971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.148 [2024-07-14 19:06:04.116318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.148 [2024-07-14 19:06:04.116349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.148 [2024-07-14 19:06:04.116366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.148 [2024-07-14 19:06:04.116604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.148 [2024-07-14 19:06:04.116845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.148 [2024-07-14 19:06:04.116868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.148 [2024-07-14 19:06:04.116893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.148 [2024-07-14 19:06:04.120463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.148 [2024-07-14 19:06:04.129982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.148 [2024-07-14 19:06:04.130396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.148 [2024-07-14 19:06:04.130433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.148 [2024-07-14 19:06:04.130450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.148 [2024-07-14 19:06:04.130702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.148 [2024-07-14 19:06:04.130956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.148 [2024-07-14 19:06:04.130981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.148 [2024-07-14 19:06:04.130995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.148 [2024-07-14 19:06:04.134567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.148 [2024-07-14 19:06:04.143851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.148 [2024-07-14 19:06:04.144261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.148 [2024-07-14 19:06:04.144293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.148 [2024-07-14 19:06:04.144310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.148 [2024-07-14 19:06:04.144547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.148 [2024-07-14 19:06:04.144789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.148 [2024-07-14 19:06:04.144812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.148 [2024-07-14 19:06:04.144826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.148 [2024-07-14 19:06:04.148406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.148 [2024-07-14 19:06:04.157693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.148 [2024-07-14 19:06:04.158074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.148 [2024-07-14 19:06:04.158105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.148 [2024-07-14 19:06:04.158123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.148 [2024-07-14 19:06:04.158361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.148 [2024-07-14 19:06:04.158602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.148 [2024-07-14 19:06:04.158625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.148 [2024-07-14 19:06:04.158640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.148 [2024-07-14 19:06:04.162218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.148 [2024-07-14 19:06:04.171734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.148 [2024-07-14 19:06:04.172159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.148 [2024-07-14 19:06:04.172190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.148 [2024-07-14 19:06:04.172208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.148 [2024-07-14 19:06:04.172444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.148 [2024-07-14 19:06:04.172691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.148 [2024-07-14 19:06:04.172714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.148 [2024-07-14 19:06:04.172729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.148 [2024-07-14 19:06:04.176310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.148 [2024-07-14 19:06:04.185609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.148 [2024-07-14 19:06:04.186023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.148 [2024-07-14 19:06:04.186053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.148 [2024-07-14 19:06:04.186070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.148 [2024-07-14 19:06:04.186307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.148 [2024-07-14 19:06:04.186548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.148 [2024-07-14 19:06:04.186571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.148 [2024-07-14 19:06:04.186585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.148 [2024-07-14 19:06:04.190164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.148 [2024-07-14 19:06:04.199652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.148 [2024-07-14 19:06:04.200062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.148 [2024-07-14 19:06:04.200092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.148 [2024-07-14 19:06:04.200109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.148 [2024-07-14 19:06:04.200345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.148 [2024-07-14 19:06:04.200587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.148 [2024-07-14 19:06:04.200610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.200624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.204203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.213691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.214050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.214081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.214098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.214334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.214576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.214598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.214613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.218199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.227704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.228111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.228142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.228160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.228396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.228638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.228661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.228676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.232258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.241755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.242146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.242177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.242194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.242432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.242673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.242696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.242711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.246285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.255778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.256169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.256200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.256217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.256454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.256696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.256719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.256734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.260318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.269817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.270187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.270217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.270241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.270479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.270720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.270742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.270757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.274336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.283830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.284185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.284215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.284232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.284469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.284710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.284732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.284747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.288330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.297813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.298216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.298246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.298263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.298500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.298741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.298764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.298778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.302361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.311666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.312052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.312082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.312099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.312336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.312577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.312606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.312621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.316199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.325684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.326070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.326100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.326117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.326354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.326595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.326618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.326632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.330230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.339713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.340125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.340156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.340173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.340410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.340651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.149 [2024-07-14 19:06:04.340673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.149 [2024-07-14 19:06:04.340688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.149 [2024-07-14 19:06:04.344268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.149 [2024-07-14 19:06:04.353750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.149 [2024-07-14 19:06:04.354106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.149 [2024-07-14 19:06:04.354136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.149 [2024-07-14 19:06:04.354153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.149 [2024-07-14 19:06:04.354390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.149 [2024-07-14 19:06:04.354631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.150 [2024-07-14 19:06:04.354654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.150 [2024-07-14 19:06:04.354669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.150 [2024-07-14 19:06:04.358249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.150 [2024-07-14 19:06:04.367738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.150 [2024-07-14 19:06:04.368144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.150 [2024-07-14 19:06:04.368175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.150 [2024-07-14 19:06:04.368192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.150 [2024-07-14 19:06:04.368428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.150 [2024-07-14 19:06:04.368670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.150 [2024-07-14 19:06:04.368693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.150 [2024-07-14 19:06:04.368707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.441 [2024-07-14 19:06:04.372305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.441 [2024-07-14 19:06:04.381716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.441 [2024-07-14 19:06:04.382129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.441 [2024-07-14 19:06:04.382160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.441 [2024-07-14 19:06:04.382178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.441 [2024-07-14 19:06:04.382415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.441 [2024-07-14 19:06:04.382656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.441 [2024-07-14 19:06:04.382679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.441 [2024-07-14 19:06:04.382694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.441 [2024-07-14 19:06:04.386276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.441 [2024-07-14 19:06:04.395562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.441 [2024-07-14 19:06:04.395905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.441 [2024-07-14 19:06:04.395936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.441 [2024-07-14 19:06:04.395954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.441 [2024-07-14 19:06:04.396191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.441 [2024-07-14 19:06:04.396432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.441 [2024-07-14 19:06:04.396455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.441 [2024-07-14 19:06:04.396470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.441 [2024-07-14 19:06:04.400049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.441 [2024-07-14 19:06:04.409540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.441 [2024-07-14 19:06:04.409917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.441 [2024-07-14 19:06:04.409949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.441 [2024-07-14 19:06:04.409966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.441 [2024-07-14 19:06:04.410209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.441 [2024-07-14 19:06:04.410451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.441 [2024-07-14 19:06:04.410474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.441 [2024-07-14 19:06:04.410488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.441 [2024-07-14 19:06:04.414067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.441 [2024-07-14 19:06:04.423552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.441 [2024-07-14 19:06:04.423958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.441 [2024-07-14 19:06:04.423990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.441 [2024-07-14 19:06:04.424007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.441 [2024-07-14 19:06:04.424244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.441 [2024-07-14 19:06:04.424486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.442 [2024-07-14 19:06:04.424509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.442 [2024-07-14 19:06:04.424523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.442 [2024-07-14 19:06:04.428116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.442 [2024-07-14 19:06:04.437419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.437800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.437831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.437848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.438097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.438339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.438362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.438376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.441954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.451262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.451666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.451697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.451714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.451963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.452204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.452227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.452247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.455820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.465095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.465467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.465497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.465514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.465751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.466004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.466028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.466043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.469624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.479135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.479538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.479569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.479585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.479823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.480076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.480100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.480115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.483687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.492983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.493380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.493411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.493428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.493665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.493918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.493942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.493957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.497560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.506850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.507253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.507289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.507307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.507544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.507786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.507808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.507824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.511410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.520697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.521086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.521117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.521134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.521371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.521613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.521636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.521650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.525229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.534736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.535157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.535188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.535205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.535443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.535684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.535707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.535721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.539306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.548589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.548991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.549022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.549039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.549277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.549524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.549547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.549562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.553141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.562628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.563034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.442 [2024-07-14 19:06:04.563064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.442 [2024-07-14 19:06:04.563082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.442 [2024-07-14 19:06:04.563318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.442 [2024-07-14 19:06:04.563560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.442 [2024-07-14 19:06:04.563583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.442 [2024-07-14 19:06:04.563598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.442 [2024-07-14 19:06:04.567181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.442 [2024-07-14 19:06:04.576472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.442 [2024-07-14 19:06:04.576822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.443 [2024-07-14 19:06:04.576852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.443 [2024-07-14 19:06:04.576870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.443 [2024-07-14 19:06:04.577117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.443 [2024-07-14 19:06:04.577359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.443 [2024-07-14 19:06:04.577382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.443 [2024-07-14 19:06:04.577406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.443 [2024-07-14 19:06:04.580993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.443 [2024-07-14 19:06:04.590475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.443 [2024-07-14 19:06:04.590871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.443 [2024-07-14 19:06:04.590909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.443 [2024-07-14 19:06:04.590926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.443 [2024-07-14 19:06:04.591164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.443 [2024-07-14 19:06:04.591406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.443 [2024-07-14 19:06:04.591428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.443 [2024-07-14 19:06:04.591443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.443 [2024-07-14 19:06:04.595031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.443 [2024-07-14 19:06:04.604313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.443 [2024-07-14 19:06:04.604664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.443 [2024-07-14 19:06:04.604695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.443 [2024-07-14 19:06:04.604712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.443 [2024-07-14 19:06:04.604960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.443 [2024-07-14 19:06:04.605203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.443 [2024-07-14 19:06:04.605226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.443 [2024-07-14 19:06:04.605240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.443 [2024-07-14 19:06:04.608810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.443 [2024-07-14 19:06:04.618305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.443 [2024-07-14 19:06:04.618671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.443 [2024-07-14 19:06:04.618701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.443 [2024-07-14 19:06:04.618718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.443 [2024-07-14 19:06:04.618966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.443 [2024-07-14 19:06:04.619208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.443 [2024-07-14 19:06:04.619231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.443 [2024-07-14 19:06:04.619246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.443 [2024-07-14 19:06:04.622816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.443 [2024-07-14 19:06:04.632159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.443 [2024-07-14 19:06:04.632538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.443 [2024-07-14 19:06:04.632568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.443 [2024-07-14 19:06:04.632585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.443 [2024-07-14 19:06:04.632823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.443 [2024-07-14 19:06:04.633075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.443 [2024-07-14 19:06:04.633099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.443 [2024-07-14 19:06:04.633114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.443 [2024-07-14 19:06:04.636688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.443 [2024-07-14 19:06:04.646173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.443 [2024-07-14 19:06:04.646558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.443 [2024-07-14 19:06:04.646588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.443 [2024-07-14 19:06:04.646612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.443 [2024-07-14 19:06:04.646851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.443 [2024-07-14 19:06:04.647101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.443 [2024-07-14 19:06:04.647126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.443 [2024-07-14 19:06:04.647140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.704 [2024-07-14 19:06:04.650715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.704 [2024-07-14 19:06:04.660221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.704 [2024-07-14 19:06:04.660594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.704 [2024-07-14 19:06:04.660625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.704 [2024-07-14 19:06:04.660642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.704 [2024-07-14 19:06:04.660891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.704 [2024-07-14 19:06:04.661141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.704 [2024-07-14 19:06:04.661165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.704 [2024-07-14 19:06:04.661179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.704 [2024-07-14 19:06:04.664754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.704 [2024-07-14 19:06:04.674067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.704 [2024-07-14 19:06:04.674481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.704 [2024-07-14 19:06:04.674512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.704 [2024-07-14 19:06:04.674529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.704 [2024-07-14 19:06:04.674767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.704 [2024-07-14 19:06:04.675022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.704 [2024-07-14 19:06:04.675046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.704 [2024-07-14 19:06:04.675061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.704 [2024-07-14 19:06:04.678650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.704 [2024-07-14 19:06:04.687953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.704 [2024-07-14 19:06:04.688349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.704 [2024-07-14 19:06:04.688380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.704 [2024-07-14 19:06:04.688396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.704 [2024-07-14 19:06:04.688633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.704 [2024-07-14 19:06:04.688887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.704 [2024-07-14 19:06:04.688916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.704 [2024-07-14 19:06:04.688932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.704 [2024-07-14 19:06:04.692505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.704 [2024-07-14 19:06:04.701785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.704 [2024-07-14 19:06:04.702168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.704 [2024-07-14 19:06:04.702199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.704 [2024-07-14 19:06:04.702216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.704 [2024-07-14 19:06:04.702453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.704 [2024-07-14 19:06:04.702695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.704 [2024-07-14 19:06:04.702718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.704 [2024-07-14 19:06:04.702732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.704 [2024-07-14 19:06:04.706328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.704 [2024-07-14 19:06:04.715832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.704 [2024-07-14 19:06:04.716239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.704 [2024-07-14 19:06:04.716269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.704 [2024-07-14 19:06:04.716286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.704 [2024-07-14 19:06:04.716523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.705 [2024-07-14 19:06:04.716765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.705 [2024-07-14 19:06:04.716789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.705 [2024-07-14 19:06:04.716803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.705 [2024-07-14 19:06:04.720391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.705 [2024-07-14 19:06:04.729699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.730106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.730137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.730154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.730392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.730634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.730657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.730671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.734275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.743562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.743943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.743974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.743992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.744229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.744471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.744494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.744509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.748096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.757412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.757859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.757898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.757922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.758160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.758401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.758423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.758438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.762023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.771328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.771677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.771708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.771725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.771974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.772216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.772239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.772254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.775823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.785338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.785738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.785769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.785786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.786041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.786284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.786307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.786322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.789896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.799179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.799602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.799633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.799651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.799898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.800140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.800163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.800178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.803753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.813050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.813450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.813480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.813497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.813734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.813985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.814008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.814023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.817590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.827091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.827466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.827496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.827514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.827751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.828020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.828045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.828065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.831646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.840952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.841353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.841383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.841400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.841638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.841892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.841915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.841930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.845506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.854801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.855208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.855239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.855257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.705 [2024-07-14 19:06:04.855495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.705 [2024-07-14 19:06:04.855736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.705 [2024-07-14 19:06:04.855759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.705 [2024-07-14 19:06:04.855774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.705 [2024-07-14 19:06:04.859365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.705 [2024-07-14 19:06:04.868680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.705 [2024-07-14 19:06:04.869090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.705 [2024-07-14 19:06:04.869121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.705 [2024-07-14 19:06:04.869138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.706 [2024-07-14 19:06:04.869384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.706 [2024-07-14 19:06:04.869627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.706 [2024-07-14 19:06:04.869650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.706 [2024-07-14 19:06:04.869666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.706 [2024-07-14 19:06:04.873258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.706 [2024-07-14 19:06:04.882557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.706 [2024-07-14 19:06:04.882933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.706 [2024-07-14 19:06:04.882970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.706 [2024-07-14 19:06:04.882989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.706 [2024-07-14 19:06:04.883227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.706 [2024-07-14 19:06:04.883469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.706 [2024-07-14 19:06:04.883492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.706 [2024-07-14 19:06:04.883506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.706 [2024-07-14 19:06:04.887090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.706 [2024-07-14 19:06:04.896584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.706 [2024-07-14 19:06:04.896958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.706 [2024-07-14 19:06:04.896990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.706 [2024-07-14 19:06:04.897008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.706 [2024-07-14 19:06:04.897247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.706 [2024-07-14 19:06:04.897488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.706 [2024-07-14 19:06:04.897511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.706 [2024-07-14 19:06:04.897526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.706 [2024-07-14 19:06:04.901108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.706 [2024-07-14 19:06:04.910606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.706 [2024-07-14 19:06:04.910997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.706 [2024-07-14 19:06:04.911028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.706 [2024-07-14 19:06:04.911046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.706 [2024-07-14 19:06:04.911283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.706 [2024-07-14 19:06:04.911525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.706 [2024-07-14 19:06:04.911548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.706 [2024-07-14 19:06:04.911563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.706 [2024-07-14 19:06:04.915150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.706 [2024-07-14 19:06:04.924649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.706 [2024-07-14 19:06:04.925042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.706 [2024-07-14 19:06:04.925073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.706 [2024-07-14 19:06:04.925090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.706 [2024-07-14 19:06:04.925327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.706 [2024-07-14 19:06:04.925575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.706 [2024-07-14 19:06:04.925599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.706 [2024-07-14 19:06:04.925613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.967 [2024-07-14 19:06:04.929231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.967 [2024-07-14 19:06:04.938537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.967 [2024-07-14 19:06:04.938936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.967 [2024-07-14 19:06:04.938967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.967 [2024-07-14 19:06:04.938984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.967 [2024-07-14 19:06:04.939222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.967 [2024-07-14 19:06:04.939464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.967 [2024-07-14 19:06:04.939486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.967 [2024-07-14 19:06:04.939501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.967 [2024-07-14 19:06:04.943090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.967 [2024-07-14 19:06:04.952385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.967 [2024-07-14 19:06:04.952785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.967 [2024-07-14 19:06:04.952816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.967 [2024-07-14 19:06:04.952833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.967 [2024-07-14 19:06:04.953084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.967 [2024-07-14 19:06:04.953326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.967 [2024-07-14 19:06:04.953349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.967 [2024-07-14 19:06:04.953364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.967 [2024-07-14 19:06:04.956950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.967 [2024-07-14 19:06:04.966268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.967 [2024-07-14 19:06:04.966650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.967 [2024-07-14 19:06:04.966681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.967 [2024-07-14 19:06:04.966698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.967 [2024-07-14 19:06:04.966946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.967 [2024-07-14 19:06:04.967189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.967 [2024-07-14 19:06:04.967211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.967 [2024-07-14 19:06:04.967226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.967 [2024-07-14 19:06:04.970822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.967 [2024-07-14 19:06:04.980141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.967 [2024-07-14 19:06:04.980519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.967 [2024-07-14 19:06:04.980550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.967 [2024-07-14 19:06:04.980567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.967 [2024-07-14 19:06:04.980805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.967 [2024-07-14 19:06:04.981059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.967 [2024-07-14 19:06:04.981083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.967 [2024-07-14 19:06:04.981097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.967 [2024-07-14 19:06:04.984673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.967 [2024-07-14 19:06:04.994187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.967 [2024-07-14 19:06:04.994557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.967 [2024-07-14 19:06:04.994588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.967 [2024-07-14 19:06:04.994605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.967 [2024-07-14 19:06:04.994842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.967 [2024-07-14 19:06:04.995095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.967 [2024-07-14 19:06:04.995119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.967 [2024-07-14 19:06:04.995134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.967 [2024-07-14 19:06:04.998709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.967 [2024-07-14 19:06:05.008215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.967 [2024-07-14 19:06:05.008609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.967 [2024-07-14 19:06:05.008640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.967 [2024-07-14 19:06:05.008657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.967 [2024-07-14 19:06:05.008907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.967 [2024-07-14 19:06:05.009149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.968 [2024-07-14 19:06:05.009172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.968 [2024-07-14 19:06:05.009186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.968 [2024-07-14 19:06:05.012759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.968 [2024-07-14 19:06:05.022059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.968 [2024-07-14 19:06:05.022458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.968 [2024-07-14 19:06:05.022489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.968 [2024-07-14 19:06:05.022511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.968 [2024-07-14 19:06:05.022750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.968 [2024-07-14 19:06:05.023005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.968 [2024-07-14 19:06:05.023029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.968 [2024-07-14 19:06:05.023044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.968 [2024-07-14 19:06:05.026619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.968 [2024-07-14 19:06:05.035946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.968 [2024-07-14 19:06:05.036342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.968 [2024-07-14 19:06:05.036373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.968 [2024-07-14 19:06:05.036390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.968 [2024-07-14 19:06:05.036627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.968 [2024-07-14 19:06:05.036869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.968 [2024-07-14 19:06:05.036904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.968 [2024-07-14 19:06:05.036920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.968 [2024-07-14 19:06:05.040496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.968 [2024-07-14 19:06:05.049785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.968 [2024-07-14 19:06:05.050166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.968 [2024-07-14 19:06:05.050197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.968 [2024-07-14 19:06:05.050214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.968 [2024-07-14 19:06:05.050450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.968 [2024-07-14 19:06:05.050692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.968 [2024-07-14 19:06:05.050716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.968 [2024-07-14 19:06:05.050730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.968 [2024-07-14 19:06:05.054317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.968 [2024-07-14 19:06:05.063821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.968 [2024-07-14 19:06:05.064202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.968 [2024-07-14 19:06:05.064232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.968 [2024-07-14 19:06:05.064249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.968 [2024-07-14 19:06:05.064486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.968 [2024-07-14 19:06:05.064727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.968 [2024-07-14 19:06:05.064756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.968 [2024-07-14 19:06:05.064771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.968 [2024-07-14 19:06:05.068367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.968 [2024-07-14 19:06:05.077773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.968 [2024-07-14 19:06:05.078160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.968 [2024-07-14 19:06:05.078192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.968 [2024-07-14 19:06:05.078209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.968 [2024-07-14 19:06:05.078447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.968 [2024-07-14 19:06:05.078689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.968 [2024-07-14 19:06:05.078712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.968 [2024-07-14 19:06:05.078726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.968 [2024-07-14 19:06:05.082314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.968 [2024-07-14 19:06:05.091815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.968 [2024-07-14 19:06:05.092195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.968 [2024-07-14 19:06:05.092226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.968 [2024-07-14 19:06:05.092243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.968 [2024-07-14 19:06:05.092480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.968 [2024-07-14 19:06:05.092722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.968 [2024-07-14 19:06:05.092745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.968 [2024-07-14 19:06:05.092759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.968 [2024-07-14 19:06:05.096348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.968 [2024-07-14 19:06:05.105853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.968 [2024-07-14 19:06:05.106238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.968 [2024-07-14 19:06:05.106269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.968 [2024-07-14 19:06:05.106286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.968 [2024-07-14 19:06:05.106523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.968 [2024-07-14 19:06:05.106764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.968 [2024-07-14 19:06:05.106787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.968 [2024-07-14 19:06:05.106802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.968 [2024-07-14 19:06:05.110386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.968 [2024-07-14 19:06:05.119894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:16.968 [2024-07-14 19:06:05.120293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.968 [2024-07-14 19:06:05.120323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420
00:34:16.968 [2024-07-14 19:06:05.120340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set
00:34:16.968 [2024-07-14 19:06:05.120577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor
00:34:16.968 [2024-07-14 19:06:05.120818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:16.968 [2024-07-14 19:06:05.120841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:16.968 [2024-07-14 19:06:05.120855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:16.968 [2024-07-14 19:06:05.124443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:16.968 [2024-07-14 19:06:05.133755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.968 [2024-07-14 19:06:05.134171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.968 [2024-07-14 19:06:05.134202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.968 [2024-07-14 19:06:05.134219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.968 [2024-07-14 19:06:05.134457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.968 [2024-07-14 19:06:05.134698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.968 [2024-07-14 19:06:05.134720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.968 [2024-07-14 19:06:05.134735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.968 [2024-07-14 19:06:05.138320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.968 [2024-07-14 19:06:05.147609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.968 [2024-07-14 19:06:05.148073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.968 [2024-07-14 19:06:05.148104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.968 [2024-07-14 19:06:05.148122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.968 [2024-07-14 19:06:05.148359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.968 [2024-07-14 19:06:05.148601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.968 [2024-07-14 19:06:05.148623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.968 [2024-07-14 19:06:05.148638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.968 [2024-07-14 19:06:05.152226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.968 [2024-07-14 19:06:05.161513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.968 [2024-07-14 19:06:05.161911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.968 [2024-07-14 19:06:05.161942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.968 [2024-07-14 19:06:05.161959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.969 [2024-07-14 19:06:05.162202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.969 [2024-07-14 19:06:05.162444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.969 [2024-07-14 19:06:05.162467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.969 [2024-07-14 19:06:05.162482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.969 [2024-07-14 19:06:05.166067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.969 [2024-07-14 19:06:05.175374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.969 [2024-07-14 19:06:05.175783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.969 [2024-07-14 19:06:05.175815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.969 [2024-07-14 19:06:05.175832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.969 [2024-07-14 19:06:05.176082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.969 [2024-07-14 19:06:05.176325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.969 [2024-07-14 19:06:05.176349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.969 [2024-07-14 19:06:05.176364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.969 [2024-07-14 19:06:05.179960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.969 [2024-07-14 19:06:05.189267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.969 [2024-07-14 19:06:05.189639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.969 [2024-07-14 19:06:05.189670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:16.969 [2024-07-14 19:06:05.189687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:16.969 [2024-07-14 19:06:05.189939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:16.969 [2024-07-14 19:06:05.190181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.969 [2024-07-14 19:06:05.190203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.969 [2024-07-14 19:06:05.190218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.230 [2024-07-14 19:06:05.193794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.230 [2024-07-14 19:06:05.203328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.230 [2024-07-14 19:06:05.203736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.230 [2024-07-14 19:06:05.203766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.230 [2024-07-14 19:06:05.203783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.230 [2024-07-14 19:06:05.204035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.230 [2024-07-14 19:06:05.204277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.230 [2024-07-14 19:06:05.204300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.230 [2024-07-14 19:06:05.204321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.230 [2024-07-14 19:06:05.207906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.230 [2024-07-14 19:06:05.217199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.230 [2024-07-14 19:06:05.217594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.230 [2024-07-14 19:06:05.217625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.230 [2024-07-14 19:06:05.217642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.230 [2024-07-14 19:06:05.217889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.230 [2024-07-14 19:06:05.218132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.230 [2024-07-14 19:06:05.218155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.230 [2024-07-14 19:06:05.218170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.230 [2024-07-14 19:06:05.221747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.230 [2024-07-14 19:06:05.231072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.230 [2024-07-14 19:06:05.231477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.230 [2024-07-14 19:06:05.231508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.230 [2024-07-14 19:06:05.231525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.230 [2024-07-14 19:06:05.231762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.230 [2024-07-14 19:06:05.232015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.230 [2024-07-14 19:06:05.232040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.230 [2024-07-14 19:06:05.232055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.230 [2024-07-14 19:06:05.235634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.230 [2024-07-14 19:06:05.244935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.230 [2024-07-14 19:06:05.245329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.230 [2024-07-14 19:06:05.245359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.230 [2024-07-14 19:06:05.245377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.230 [2024-07-14 19:06:05.245614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.230 [2024-07-14 19:06:05.245856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.230 [2024-07-14 19:06:05.245890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.230 [2024-07-14 19:06:05.245907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.230 [2024-07-14 19:06:05.249483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.230 [2024-07-14 19:06:05.258796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.230 [2024-07-14 19:06:05.259187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.230 [2024-07-14 19:06:05.259217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.230 [2024-07-14 19:06:05.259235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.230 [2024-07-14 19:06:05.259472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.230 [2024-07-14 19:06:05.259713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.230 [2024-07-14 19:06:05.259736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.230 [2024-07-14 19:06:05.259751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.230 [2024-07-14 19:06:05.263342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.230 [2024-07-14 19:06:05.272685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.230 [2024-07-14 19:06:05.273116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.230 [2024-07-14 19:06:05.273147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.230 [2024-07-14 19:06:05.273165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.230 [2024-07-14 19:06:05.273402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.230 [2024-07-14 19:06:05.273643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.273667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.273681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.277269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.286580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.286996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.287027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.287044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.287282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.287523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.287546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.287561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.291142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.300419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.300863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.300934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.300951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.301189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.301440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.301463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.301479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.305062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.314345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.314724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.314755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.314773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.315019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.315261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.315284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.315299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.318871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.328383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.328796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.328827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.328844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.329096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.329338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.329361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.329376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.332980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.342271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.342643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.342674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.342691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.342938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.343181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.343204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.343218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.346799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.356303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.356702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.356732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.356749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.356997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.357239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.357263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.357277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.360848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.370365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.370763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.370793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.370810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.371056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.371299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.371322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.371337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.374913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.384225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.384621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.384652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.384670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.384916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.385157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.385181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.385195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.388765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.398270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.398620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.398650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.398673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.398923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.399176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.399199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.399214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.402785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.412296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.412691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.412722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.412739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.231 [2024-07-14 19:06:05.412989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.231 [2024-07-14 19:06:05.413231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.231 [2024-07-14 19:06:05.413254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.231 [2024-07-14 19:06:05.413268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.231 [2024-07-14 19:06:05.416846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.231 [2024-07-14 19:06:05.426142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.231 [2024-07-14 19:06:05.426522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.231 [2024-07-14 19:06:05.426552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.231 [2024-07-14 19:06:05.426569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.232 [2024-07-14 19:06:05.426805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.232 [2024-07-14 19:06:05.427060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.232 [2024-07-14 19:06:05.427084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.232 [2024-07-14 19:06:05.427099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.232 [2024-07-14 19:06:05.430694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.232 [2024-07-14 19:06:05.439998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.232 [2024-07-14 19:06:05.440393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.232 [2024-07-14 19:06:05.440424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.232 [2024-07-14 19:06:05.440441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.232 [2024-07-14 19:06:05.440678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.232 [2024-07-14 19:06:05.440933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.232 [2024-07-14 19:06:05.440962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.232 [2024-07-14 19:06:05.440977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.232 [2024-07-14 19:06:05.444554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.232 [2024-07-14 19:06:05.453852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.232 [2024-07-14 19:06:05.454248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.232 [2024-07-14 19:06:05.454278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.232 [2024-07-14 19:06:05.454295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.232 [2024-07-14 19:06:05.454532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.232 [2024-07-14 19:06:05.454774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.232 [2024-07-14 19:06:05.454796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.232 [2024-07-14 19:06:05.454811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.492 [2024-07-14 19:06:05.458401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.492 [2024-07-14 19:06:05.467696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.492 [2024-07-14 19:06:05.468103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.492 [2024-07-14 19:06:05.468134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.492 [2024-07-14 19:06:05.468151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.492 [2024-07-14 19:06:05.468388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.492 [2024-07-14 19:06:05.468630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.492 [2024-07-14 19:06:05.468653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.492 [2024-07-14 19:06:05.468668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.492 [2024-07-14 19:06:05.472271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.492 [2024-07-14 19:06:05.481580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.492 [2024-07-14 19:06:05.481998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.492 [2024-07-14 19:06:05.482029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.492 [2024-07-14 19:06:05.482046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.492 [2024-07-14 19:06:05.482284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.492 [2024-07-14 19:06:05.482526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.492 [2024-07-14 19:06:05.482549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.492 [2024-07-14 19:06:05.482564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.492 [2024-07-14 19:06:05.486157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.492 [2024-07-14 19:06:05.495452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.492 [2024-07-14 19:06:05.495861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.492 [2024-07-14 19:06:05.495900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.492 [2024-07-14 19:06:05.495919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.492 [2024-07-14 19:06:05.496156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.492 [2024-07-14 19:06:05.496398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.492 [2024-07-14 19:06:05.496420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.492 [2024-07-14 19:06:05.496434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.492 [2024-07-14 19:06:05.500023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.492 [2024-07-14 19:06:05.509314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.492 [2024-07-14 19:06:05.509709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.492 [2024-07-14 19:06:05.509740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.492 [2024-07-14 19:06:05.509757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.492 [2024-07-14 19:06:05.510008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.492 [2024-07-14 19:06:05.510251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.492 [2024-07-14 19:06:05.510274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.492 [2024-07-14 19:06:05.510289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.492 [2024-07-14 19:06:05.513866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.492 [2024-07-14 19:06:05.523164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.492 [2024-07-14 19:06:05.523557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.493 [2024-07-14 19:06:05.523587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.493 [2024-07-14 19:06:05.523604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.493 [2024-07-14 19:06:05.523842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.493 [2024-07-14 19:06:05.524095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.493 [2024-07-14 19:06:05.524119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.493 [2024-07-14 19:06:05.524133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.493 [2024-07-14 19:06:05.527709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.493 [2024-07-14 19:06:05.537020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.493 [2024-07-14 19:06:05.537389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.493 [2024-07-14 19:06:05.537419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.493 [2024-07-14 19:06:05.537436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.493 [2024-07-14 19:06:05.537679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.493 [2024-07-14 19:06:05.537935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.493 [2024-07-14 19:06:05.537959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.493 [2024-07-14 19:06:05.537973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.493 [2024-07-14 19:06:05.541552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.493 [2024-07-14 19:06:05.551064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.493 [2024-07-14 19:06:05.551437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.493 [2024-07-14 19:06:05.551467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.493 [2024-07-14 19:06:05.551484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.493 [2024-07-14 19:06:05.551721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.493 [2024-07-14 19:06:05.551976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.493 [2024-07-14 19:06:05.551999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.493 [2024-07-14 19:06:05.552015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.493 [2024-07-14 19:06:05.555588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.493 [2024-07-14 19:06:05.565091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.493 [2024-07-14 19:06:05.565485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.493 [2024-07-14 19:06:05.565516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.493 [2024-07-14 19:06:05.565533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.493 [2024-07-14 19:06:05.565770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.493 [2024-07-14 19:06:05.566023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.493 [2024-07-14 19:06:05.566047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.493 [2024-07-14 19:06:05.566062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.493 [2024-07-14 19:06:05.569635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3750630 Killed "${NVMF_APP[@]}" "$@" 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:17.493 [2024-07-14 19:06:05.578960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.493 [2024-07-14 19:06:05.579357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.493 [2024-07-14 19:06:05.579387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.493 [2024-07-14 19:06:05.579410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.493 [2024-07-14 19:06:05.579649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.493 [2024-07-14 19:06:05.579902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.493 [2024-07-14 19:06:05.579937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.493 [2024-07-14 19:06:05.579951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.493 [2024-07-14 19:06:05.583537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3751695 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3751695 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3751695 ']' 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:17.493 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.493 [2024-07-14 19:06:05.592833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.493 [2024-07-14 19:06:05.593246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.493 [2024-07-14 19:06:05.593277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.493 [2024-07-14 19:06:05.593296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.493 [2024-07-14 19:06:05.593534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.493 [2024-07-14 19:06:05.593776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:17.493 [2024-07-14 19:06:05.593799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.493 [2024-07-14 19:06:05.593814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.493 [2024-07-14 19:06:05.597396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.493 [2024-07-14 19:06:05.606688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.493 [2024-07-14 19:06:05.607092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.493 [2024-07-14 19:06:05.607124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.493 [2024-07-14 19:06:05.607141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.493 [2024-07-14 19:06:05.607377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.493 [2024-07-14 19:06:05.607619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.493 [2024-07-14 19:06:05.607642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.493 [2024-07-14 19:06:05.607662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.493 [2024-07-14 19:06:05.611253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:17.493 [2024-07-14 19:06:05.620543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.493 [2024-07-14 19:06:05.620920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.493 [2024-07-14 19:06:05.620952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.493 [2024-07-14 19:06:05.620969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.493 [2024-07-14 19:06:05.621207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.493 [2024-07-14 19:06:05.621448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.493 [2024-07-14 19:06:05.621471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.493 [2024-07-14 19:06:05.621486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.493 [2024-07-14 19:06:05.625067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.493 [2024-07-14 19:06:05.633701] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:34:17.493 [2024-07-14 19:06:05.633779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.493 [2024-07-14 19:06:05.634589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.493 [2024-07-14 19:06:05.634981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.493 [2024-07-14 19:06:05.635012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.493 [2024-07-14 19:06:05.635030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.493 [2024-07-14 19:06:05.635268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.493 [2024-07-14 19:06:05.635510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.493 [2024-07-14 19:06:05.635533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.493 [2024-07-14 19:06:05.635548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.493 [2024-07-14 19:06:05.639125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.493 [2024-07-14 19:06:05.648518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.493 [2024-07-14 19:06:05.648892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.494 [2024-07-14 19:06:05.648923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.494 [2024-07-14 19:06:05.648941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.494 [2024-07-14 19:06:05.649179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.494 [2024-07-14 19:06:05.649421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.494 [2024-07-14 19:06:05.649444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.494 [2024-07-14 19:06:05.649459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.494 [2024-07-14 19:06:05.653045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.494 [2024-07-14 19:06:05.662533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.494 [2024-07-14 19:06:05.662951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.494 [2024-07-14 19:06:05.662983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.494 [2024-07-14 19:06:05.663000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.494 [2024-07-14 19:06:05.663238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.494 [2024-07-14 19:06:05.663479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.494 [2024-07-14 19:06:05.663503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.494 [2024-07-14 19:06:05.663518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.494 [2024-07-14 19:06:05.667102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.494 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.494 [2024-07-14 19:06:05.676011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.494 [2024-07-14 19:06:05.676382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.494 [2024-07-14 19:06:05.676409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.494 [2024-07-14 19:06:05.676424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.494 [2024-07-14 19:06:05.676631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.494 [2024-07-14 19:06:05.676851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.494 [2024-07-14 19:06:05.676894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.494 [2024-07-14 19:06:05.676908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.494 [2024-07-14 19:06:05.680150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.494 [2024-07-14 19:06:05.689423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.494 [2024-07-14 19:06:05.689791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.494 [2024-07-14 19:06:05.689818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.494 [2024-07-14 19:06:05.689834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.494 [2024-07-14 19:06:05.690058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.494 [2024-07-14 19:06:05.690302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.494 [2024-07-14 19:06:05.690322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.494 [2024-07-14 19:06:05.690335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.494 [2024-07-14 19:06:05.693403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.494 [2024-07-14 19:06:05.701786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:17.494 [2024-07-14 19:06:05.702873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.494 [2024-07-14 19:06:05.703286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.494 [2024-07-14 19:06:05.703334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.494 [2024-07-14 19:06:05.703350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.494 [2024-07-14 19:06:05.703607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.494 [2024-07-14 19:06:05.703812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.494 [2024-07-14 19:06:05.703832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.494 [2024-07-14 19:06:05.703844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.494 [2024-07-14 19:06:05.706954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.494 [2024-07-14 19:06:05.716549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.494 [2024-07-14 19:06:05.717074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.494 [2024-07-14 19:06:05.717113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.494 [2024-07-14 19:06:05.717142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.494 [2024-07-14 19:06:05.717380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.754 [2024-07-14 19:06:05.717606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.754 [2024-07-14 19:06:05.717627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.754 [2024-07-14 19:06:05.717658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.754 [2024-07-14 19:06:05.720809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.754 [2024-07-14 19:06:05.729938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.754 [2024-07-14 19:06:05.730378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.754 [2024-07-14 19:06:05.730406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.754 [2024-07-14 19:06:05.730421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.754 [2024-07-14 19:06:05.730679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.754 [2024-07-14 19:06:05.730914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.754 [2024-07-14 19:06:05.730936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.754 [2024-07-14 19:06:05.730951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.754 [2024-07-14 19:06:05.734021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.754 [2024-07-14 19:06:05.743407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.754 [2024-07-14 19:06:05.743774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.754 [2024-07-14 19:06:05.743803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.754 [2024-07-14 19:06:05.743819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.754 [2024-07-14 19:06:05.744074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.754 [2024-07-14 19:06:05.744309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.754 [2024-07-14 19:06:05.744330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.754 [2024-07-14 19:06:05.744343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.754 [2024-07-14 19:06:05.747411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.754 [2024-07-14 19:06:05.756817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.754 [2024-07-14 19:06:05.757342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.754 [2024-07-14 19:06:05.757379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.754 [2024-07-14 19:06:05.757397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.754 [2024-07-14 19:06:05.757629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.754 [2024-07-14 19:06:05.757843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.754 [2024-07-14 19:06:05.757887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.754 [2024-07-14 19:06:05.757905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.754 [2024-07-14 19:06:05.761009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.754 [2024-07-14 19:06:05.770213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.754 [2024-07-14 19:06:05.770624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.754 [2024-07-14 19:06:05.770670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.754 [2024-07-14 19:06:05.770687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.754 [2024-07-14 19:06:05.770939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.754 [2024-07-14 19:06:05.771151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.754 [2024-07-14 19:06:05.771172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.754 [2024-07-14 19:06:05.771185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.754 [2024-07-14 19:06:05.774271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.754 [2024-07-14 19:06:05.783653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.754 [2024-07-14 19:06:05.784081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.754 [2024-07-14 19:06:05.784110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.754 [2024-07-14 19:06:05.784125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.754 [2024-07-14 19:06:05.784367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.754 [2024-07-14 19:06:05.784573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.754 [2024-07-14 19:06:05.784593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.754 [2024-07-14 19:06:05.784606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.754 [2024-07-14 19:06:05.787692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:17.754 [2024-07-14 19:06:05.793375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.754 [2024-07-14 19:06:05.793409] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.754 [2024-07-14 19:06:05.793438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.754 [2024-07-14 19:06:05.793450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:17.754 [2024-07-14 19:06:05.793460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.754 [2024-07-14 19:06:05.793519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:17.754 [2024-07-14 19:06:05.793579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:17.754 [2024-07-14 19:06:05.793582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.754 [2024-07-14 19:06:05.797271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.754 [2024-07-14 19:06:05.797673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.754 [2024-07-14 19:06:05.797719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.754 [2024-07-14 19:06:05.797736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.754 [2024-07-14 19:06:05.797980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.754 [2024-07-14 19:06:05.798201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.754 [2024-07-14 19:06:05.798222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.754 [2024-07-14 19:06:05.798238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.754 [2024-07-14 19:06:05.801509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.754 [2024-07-14 19:06:05.810980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.754 [2024-07-14 19:06:05.811505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.754 [2024-07-14 19:06:05.811542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.754 [2024-07-14 19:06:05.811561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.754 [2024-07-14 19:06:05.811783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.754 [2024-07-14 19:06:05.812014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.754 [2024-07-14 19:06:05.812037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.754 [2024-07-14 19:06:05.812052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.754 [2024-07-14 19:06:05.815335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.754 [2024-07-14 19:06:05.824610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.754 [2024-07-14 19:06:05.825147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.754 [2024-07-14 19:06:05.825197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.754 [2024-07-14 19:06:05.825216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.754 [2024-07-14 19:06:05.825450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.754 [2024-07-14 19:06:05.825682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.754 [2024-07-14 19:06:05.825703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.825719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 [2024-07-14 19:06:05.829010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 [2024-07-14 19:06:05.838304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.838851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.838907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.838938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.755 [2024-07-14 19:06:05.839163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.755 [2024-07-14 19:06:05.839395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.755 [2024-07-14 19:06:05.839417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.839433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 [2024-07-14 19:06:05.842705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 [2024-07-14 19:06:05.851923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.852335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.852373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.852392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.755 [2024-07-14 19:06:05.852614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.755 [2024-07-14 19:06:05.852834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.755 [2024-07-14 19:06:05.852856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.852872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 [2024-07-14 19:06:05.856098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 [2024-07-14 19:06:05.865512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.866063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.866107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.866127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.755 [2024-07-14 19:06:05.866350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.755 [2024-07-14 19:06:05.866572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.755 [2024-07-14 19:06:05.866593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.866609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 [2024-07-14 19:06:05.869849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 [2024-07-14 19:06:05.879106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.879540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.879579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.879597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.755 [2024-07-14 19:06:05.879817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.755 [2024-07-14 19:06:05.880045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.755 [2024-07-14 19:06:05.880067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.880083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 [2024-07-14 19:06:05.883346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 [2024-07-14 19:06:05.892752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.893094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.893123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.893139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.755 [2024-07-14 19:06:05.893354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.755 [2024-07-14 19:06:05.893571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.755 [2024-07-14 19:06:05.893592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.893605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 [2024-07-14 19:06:05.896820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 [2024-07-14 19:06:05.906303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.906650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.906678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.906694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.755 [2024-07-14 19:06:05.906915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.755 [2024-07-14 19:06:05.907133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.755 [2024-07-14 19:06:05.907154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.907168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 [2024-07-14 19:06:05.910378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.755 [2024-07-14 19:06:05.919956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.920307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.920335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.920351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.755 [2024-07-14 19:06:05.920565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.755 [2024-07-14 19:06:05.920783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.755 [2024-07-14 19:06:05.920804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.920818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 [2024-07-14 19:06:05.924069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 [2024-07-14 19:06:05.933502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.933847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.933882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.933900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.755 [2024-07-14 19:06:05.934125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.755 [2024-07-14 19:06:05.934353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.755 [2024-07-14 19:06:05.934374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.934388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.755 [2024-07-14 19:06:05.937647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.755 [2024-07-14 19:06:05.941743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.755 [2024-07-14 19:06:05.947044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.947382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.947410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.947426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.755 [2024-07-14 19:06:05.947639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.755 [2024-07-14 19:06:05.947857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.755 [2024-07-14 19:06:05.947885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.755 [2024-07-14 19:06:05.947908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.755 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:17.755 [2024-07-14 19:06:05.951174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.755 [2024-07-14 19:06:05.960585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.755 [2024-07-14 19:06:05.960972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.755 [2024-07-14 19:06:05.961000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.755 [2024-07-14 19:06:05.961016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.756 [2024-07-14 19:06:05.961230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.756 [2024-07-14 19:06:05.961448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.756 [2024-07-14 19:06:05.961469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.756 [2024-07-14 19:06:05.961482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.756 [2024-07-14 19:06:05.964746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:17.756 [2024-07-14 19:06:05.974247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:17.756 [2024-07-14 19:06:05.974719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.756 [2024-07-14 19:06:05.974760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:17.756 [2024-07-14 19:06:05.974778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:17.756 [2024-07-14 19:06:05.975009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:17.756 [2024-07-14 19:06:05.975230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.756 [2024-07-14 19:06:05.975252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.756 [2024-07-14 19:06:05.975269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.756 [2024-07-14 19:06:05.978594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:18.016 Malloc0 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:18.016 [2024-07-14 19:06:05.987825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:18.016 [2024-07-14 19:06:05.988183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.016 [2024-07-14 19:06:05.988211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6fed0 with addr=10.0.0.2, port=4420 00:34:18.016 [2024-07-14 19:06:05.988228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6fed0 is same with the state(5) to be set 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:18.016 [2024-07-14 19:06:05.988453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6fed0 (9): Bad file descriptor 00:34:18.016 [2024-07-14 19:06:05.988671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:18.016 [2024-07-14 19:06:05.988692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:18.016 [2024-07-14 19:06:05.988705] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:18.016 [2024-07-14 19:06:05.991975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.016 19:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:18.016 [2024-07-14 19:06:05.999855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.016 [2024-07-14 19:06:06.001431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.016 19:06:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.016 19:06:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3750826 00:34:18.016 [2024-07-14 19:06:06.037452] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:27.995 00:34:27.995 Latency(us) 00:34:27.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.995 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:27.995 Verification LBA range: start 0x0 length 0x4000 00:34:27.995 Nvme1n1 : 15.01 6662.07 26.02 8547.12 0.00 8390.36 770.65 21359.88 00:34:27.995 =================================================================================================================== 00:34:27.995 Total : 6662.07 26.02 8547.12 0.00 8390.36 770.65 21359.88 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:27.995 rmmod nvme_tcp 00:34:27.995 rmmod nvme_fabrics 00:34:27.995 rmmod nvme_keyring 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3751695 ']' 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3751695 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3751695 ']' 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3751695 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3751695 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3751695' 00:34:27.995 killing process with pid 3751695 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3751695 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3751695 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.995 19:06:15 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:27.995 19:06:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.899 19:06:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:29.899 00:34:29.899 real 0m22.270s 00:34:29.899 user 1m0.173s 00:34:29.899 sys 0m4.026s 00:34:29.899 19:06:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:29.900 19:06:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:29.900 ************************************ 00:34:29.900 END TEST nvmf_bdevperf 00:34:29.900 ************************************ 00:34:29.900 19:06:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:29.900 19:06:17 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:29.900 19:06:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:29.900 19:06:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:29.900 19:06:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:29.900 ************************************ 00:34:29.900 START TEST nvmf_target_disconnect 00:34:29.900 ************************************ 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:29.900 * Looking for test storage... 
00:34:29.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:29.900 19:06:17 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:29.900 19:06:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:31.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:31.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:31.801 19:06:19 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:31.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.801 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:31.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:31.802 19:06:19 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:31.802 19:06:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:31.802 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:31.802 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:31.802 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:31.802 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.060 19:06:20 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.060 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.060 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:32.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:34:32.060 00:34:32.060 --- 10.0.0.2 ping statistics --- 00:34:32.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.060 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:34:32.060 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:32.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:34:32.060 00:34:32.060 --- 10.0.0.1 ping statistics --- 00:34:32.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.060 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:34:32.060 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:32.061 19:06:20 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:32.061 ************************************ 00:34:32.061 START TEST nvmf_target_disconnect_tc1 00:34:32.061 ************************************ 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:32.061 19:06:20 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:32.061 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.061 [2024-07-14 19:06:20.213903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.061 [2024-07-14 19:06:20.213979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2231e70 with addr=10.0.0.2, port=4420 00:34:32.061 [2024-07-14 19:06:20.214019] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:32.061 [2024-07-14 19:06:20.214045] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:32.061 [2024-07-14 19:06:20.214060] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:32.061 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:32.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:32.061 Initializing NVMe Controllers 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:32.061 00:34:32.061 real 0m0.099s 00:34:32.061 user 0m0.033s 00:34:32.061 sys 0m0.064s 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:32.061 ************************************ 00:34:32.061 END TEST nvmf_target_disconnect_tc1 00:34:32.061 ************************************ 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:32.061 ************************************ 00:34:32.061 START TEST nvmf_target_disconnect_tc2 00:34:32.061 
************************************ 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3755356 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3755356 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3755356 ']' 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:32.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:32.061 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.321 [2024-07-14 19:06:20.329045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:32.321 [2024-07-14 19:06:20.329118] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:32.321 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.321 [2024-07-14 19:06:20.393704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:32.321 [2024-07-14 19:06:20.478344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:32.321 [2024-07-14 19:06:20.478402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:32.321 [2024-07-14 19:06:20.478435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:32.321 [2024-07-14 19:06:20.478446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:32.321 [2024-07-14 19:06:20.478455] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:32.321 [2024-07-14 19:06:20.478541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:32.321 [2024-07-14 19:06:20.479010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:32.321 [2024-07-14 19:06:20.479068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:32.321 [2024-07-14 19:06:20.479071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.581 Malloc0 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.581 [2024-07-14 19:06:20.660703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.581 [2024-07-14 19:06:20.689074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3755380 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:32.581 19:06:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:32.581 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.484 19:06:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3755356 00:34:34.484 19:06:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error 
(sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Write completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Write completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Write completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Write completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Write completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Write completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Write completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.767 Read completed with error (sct=0, sc=8) 00:34:34.767 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, 
sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 [2024-07-14 19:06:22.716539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 
starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 [2024-07-14 19:06:22.716846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O 
failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 
00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 [2024-07-14 19:06:22.717150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 
Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Write completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 Read completed with error (sct=0, sc=8) 00:34:34.768 starting I/O failed 00:34:34.768 [2024-07-14 19:06:22.717477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:34.768 [2024-07-14 19:06:22.717741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.768 [2024-07-14 19:06:22.717782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.768 qpair failed and we were unable to recover it. 
00:34:34.768 [2024-07-14 19:06:22.717895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.768 [2024-07-14 19:06:22.717923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.768 qpair failed and we were unable to recover it. 00:34:34.768 [2024-07-14 19:06:22.718025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.768 [2024-07-14 19:06:22.718051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.768 qpair failed and we were unable to recover it. 00:34:34.768 [2024-07-14 19:06:22.718156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.768 [2024-07-14 19:06:22.718186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.768 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.718305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.718331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.718486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.718512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 
00:34:34.769 [2024-07-14 19:06:22.718616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.718641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.718750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.718776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.718886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.718912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.719047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.719073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.719180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.719205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 
00:34:34.769 [2024-07-14 19:06:22.719338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.719363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.719452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.719477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.719590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.719623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.719756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.719783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.719977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.720003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 
00:34:34.769 [2024-07-14 19:06:22.720112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.720139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.720239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.720265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.720379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.720405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.720560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.720587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.720695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.720720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 
00:34:34.769 [2024-07-14 19:06:22.720841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.720881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.721017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.721043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.721150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.721179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.721307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.721333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.721459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.721485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 
00:34:34.769 [2024-07-14 19:06:22.721581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.721606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.721707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.721733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.721861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.721896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.722017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.722043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.722194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.722220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 
00:34:34.769 [2024-07-14 19:06:22.722356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.722382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.722536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.722561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.722689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.722715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.722811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.722837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.722976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.723002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 
00:34:34.769 [2024-07-14 19:06:22.723133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.723158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.723252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.723277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.723406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.723432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.723538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.723563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.723691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.723717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 
00:34:34.769 [2024-07-14 19:06:22.723814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.723840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.723933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.723959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.724060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.769 [2024-07-14 19:06:22.724085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.769 qpair failed and we were unable to recover it. 00:34:34.769 [2024-07-14 19:06:22.724214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.770 [2024-07-14 19:06:22.724240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.770 qpair failed and we were unable to recover it. 00:34:34.770 [2024-07-14 19:06:22.724363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.770 [2024-07-14 19:06:22.724388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.770 qpair failed and we were unable to recover it. 
00:34:34.770 [2024-07-14 19:06:22.724545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.724570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.724659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.724684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.724809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.724838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.724974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.725002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.725140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.725165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.725291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.725317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.725460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.725488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.725606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.725648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.725804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.725860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.725983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.726011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.726111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.726137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.726237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.726263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.726416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.726471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.726654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.726679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.726799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.726824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.726959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.726998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.727131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.727171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.727280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.727308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.727514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.727540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.727692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.727718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.727850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.727883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.727986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.728013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.728138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.728164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.728299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.728325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.728454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.728480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.728716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.728767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.728883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.728920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.729070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.729096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.729190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.729215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.729343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.729373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.729474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.729500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.729624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.729649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.729745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.729771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.729881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.729909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.730040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.730066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.730166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.730194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.730319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.730345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.730490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.730533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.770 [2024-07-14 19:06:22.730639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.770 [2024-07-14 19:06:22.730666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.770 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.730832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.730859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.730984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.731022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.731127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.731156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.731285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.731311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.731424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.731450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.731569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.731594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.731693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.731718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.731887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.731926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.732044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.732071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.732188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.732217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.732456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.732481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.732610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.732635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.732758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.732783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.732912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.732939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.733074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.733100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.733239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.733267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.733396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.733422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.733563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.733602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.733749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.733779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.733930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.733957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.734100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.734126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.734229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.734255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.734386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.734413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.734537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.734563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.734675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.734713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.734835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.734865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.735014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.735040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.735135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.735160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.735259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.735285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.735380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.735405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.735527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.735558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.735749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.735778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.735870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.735908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.736013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.736039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.736203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.736229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.736450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.736476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.736599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.736625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.736725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.736751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.736852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.736885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.736992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.737019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.771 qpair failed and we were unable to recover it.
00:34:34.771 [2024-07-14 19:06:22.737137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.771 [2024-07-14 19:06:22.737163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.737263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.737290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.737468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.737497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.737678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.737704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.737834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.737860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.737989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.738027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.738132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.738160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.738252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.738278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.738365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.738390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.738494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.738520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.738661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.738690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.738848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.738882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.739010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.739036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.739164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.739190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.739312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.739354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.739553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.739619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.739854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.739894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.740053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.740084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.740184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.740209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.740362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.740388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.740483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.740508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.740627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.740656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.740825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.740864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.740998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.741037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.741142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.741170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.741409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.741435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.741559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.741585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.741734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.741763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.741931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.741959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.742074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.742100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.742231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.742257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.742390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.772 [2024-07-14 19:06:22.742417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.772 qpair failed and we were unable to recover it.
00:34:34.772 [2024-07-14 19:06:22.742549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.773 [2024-07-14 19:06:22.742574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.773 qpair failed and we were unable to recover it.
00:34:34.773 [2024-07-14 19:06:22.742712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.773 [2024-07-14 19:06:22.742756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:34.773 qpair failed and we were unable to recover it.
00:34:34.773 [2024-07-14 19:06:22.742886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.773 [2024-07-14 19:06:22.742913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.773 qpair failed and we were unable to recover it.
00:34:34.773 [2024-07-14 19:06:22.743046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.773 [2024-07-14 19:06:22.743072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.773 qpair failed and we were unable to recover it.
00:34:34.773 [2024-07-14 19:06:22.743176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.743202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.743332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.743358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.743485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.743511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.743608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.743634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.743797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.743823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 
00:34:34.773 [2024-07-14 19:06:22.743985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.744025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.744134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.744162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.744294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.744321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.744449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.744476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.744632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.744658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 
00:34:34.773 [2024-07-14 19:06:22.744789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.744815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.744976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.745003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.745107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.745133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.745252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.745278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.745407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.745435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 
00:34:34.773 [2024-07-14 19:06:22.745585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.745614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.745733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.745775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.745943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.745983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.746128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.746166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.746299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.746326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 
00:34:34.773 [2024-07-14 19:06:22.746487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.746513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.746602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.746633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.746786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.746811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.746961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.747000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.747141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.747180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 
00:34:34.773 [2024-07-14 19:06:22.747357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.747404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.747536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.747564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.747716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.747743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.747849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.747881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.747980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.748006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 
00:34:34.773 [2024-07-14 19:06:22.748101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.748127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.748251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.773 [2024-07-14 19:06:22.748277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.773 qpair failed and we were unable to recover it. 00:34:34.773 [2024-07-14 19:06:22.748379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.748406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.748509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.748535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.748676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.748751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 
00:34:34.774 [2024-07-14 19:06:22.748934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.748963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.749091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.749118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.749217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.749242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.749380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.749425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.749537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.749582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 
00:34:34.774 [2024-07-14 19:06:22.749713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.749740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.749884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.749911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.750007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.750033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.750137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.750164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.750314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.750357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 
00:34:34.774 [2024-07-14 19:06:22.750504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.750548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.750694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.750732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.750874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.750920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.751034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.751070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.751227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.751253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 
00:34:34.774 [2024-07-14 19:06:22.751428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.751454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.751583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.751608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.751737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.751763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.751919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.751965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.752119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.752144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 
00:34:34.774 [2024-07-14 19:06:22.752273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.752315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.752454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.752479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.752610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.752635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.752753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.752778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.752874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.752906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 
00:34:34.774 [2024-07-14 19:06:22.753039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.753066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.753207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.753246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.753383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.753411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.753515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.753542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.753669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.753695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 
00:34:34.774 [2024-07-14 19:06:22.753825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.753852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.753969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.753995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.754099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.754125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.754242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.754272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.754402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.754430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 
00:34:34.774 [2024-07-14 19:06:22.754544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.754585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.754692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.754732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.774 qpair failed and we were unable to recover it. 00:34:34.774 [2024-07-14 19:06:22.754857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.774 [2024-07-14 19:06:22.754902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:34.775 qpair failed and we were unable to recover it. 00:34:34.775 [2024-07-14 19:06:22.755039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.775 [2024-07-14 19:06:22.755065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:34.775 qpair failed and we were unable to recover it. 00:34:34.775 [2024-07-14 19:06:22.755192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.775 [2024-07-14 19:06:22.755218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:34.775 qpair failed and we were unable to recover it. 
00:34:34.775 [2024-07-14 19:06:22.755328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.775 [2024-07-14 19:06:22.755357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.775 qpair failed and we were unable to recover it.
[... 114 further identical failure cycles elided (19:06:22.755 through 19:06:22.774): each cycle is the same posix.c:1038 "connect() failed, errno = 111", followed by the nvme_tcp.c:2383 sock connection error and "qpair failed and we were unable to recover it.", alternating among tqpair=0x13aff20, 0x7f8a50000b90, 0x7f8a58000b90 and 0x7f8a60000b90, all with addr=10.0.0.2, port=4420 ...]
00:34:34.778 [2024-07-14 19:06:22.774517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.774545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.774657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.774697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.774858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.774892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.775027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.775052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.775168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.775193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 
00:34:34.778 [2024-07-14 19:06:22.775373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.775401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.775510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.775538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.775737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.775765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.775900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.775926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.776054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.776079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 
00:34:34.778 [2024-07-14 19:06:22.776206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.776231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.776331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.776356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.776480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.776508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.776638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.776666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.776770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.776798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 
00:34:34.778 [2024-07-14 19:06:22.776902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.776945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.777037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.777062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.777196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.777221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.777375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.777402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.777561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.777594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 
00:34:34.778 [2024-07-14 19:06:22.777723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.777752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.777937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.777976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.778124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.778163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.778294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.778339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.778528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.778580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 
00:34:34.778 [2024-07-14 19:06:22.778681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.778709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.778863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.778896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.779023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.779049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.779176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.779220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.779329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.779371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 
00:34:34.778 [2024-07-14 19:06:22.779510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.779540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.779664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.779690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.779822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.779847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.779963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.779989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.780110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.780136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 
00:34:34.778 [2024-07-14 19:06:22.780289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.778 [2024-07-14 19:06:22.780317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.778 qpair failed and we were unable to recover it. 00:34:34.778 [2024-07-14 19:06:22.780515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.780543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.780659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.780688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.780798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.780827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.780995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.781035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 
00:34:34.779 [2024-07-14 19:06:22.781193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.781240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.781394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.781437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.781610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.781655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.781805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.781831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.781976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.782006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 
00:34:34.779 [2024-07-14 19:06:22.782242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.782294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.782524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.782582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.782686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.782712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.782864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.782897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.782999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.783024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 
00:34:34.779 [2024-07-14 19:06:22.783125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.783166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.783296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.783325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.783513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.783581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.783694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.783722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.783847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.783873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 
00:34:34.779 [2024-07-14 19:06:22.783997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.784023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.784121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.784164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.784326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.784355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.784461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.784489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.784596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.784624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 
00:34:34.779 [2024-07-14 19:06:22.784759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.784787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.784944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.784970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.785068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.785093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.785230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.785269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.785421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.785466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 
00:34:34.779 [2024-07-14 19:06:22.785619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.785662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.785789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.785815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.785920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.785947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.786097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.786141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.786309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.786374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 
00:34:34.779 [2024-07-14 19:06:22.786594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.786645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.786794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.786819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.786952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.786980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.787110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.787140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.787297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.787325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 
00:34:34.779 [2024-07-14 19:06:22.787529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.787600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.779 [2024-07-14 19:06:22.787736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.779 [2024-07-14 19:06:22.787764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.779 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.787874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.787924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.788047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.788073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.788200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.788243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 
00:34:34.780 [2024-07-14 19:06:22.788382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.788410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.788533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.788576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.788710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.788738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.788839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.788867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.789044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.789069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 
00:34:34.780 [2024-07-14 19:06:22.789166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.789208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.789322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.789352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.789470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.789499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.789607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.789635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.789748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.789776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 
00:34:34.780 [2024-07-14 19:06:22.789925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.789952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.790044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.790069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.790235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.790263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.790459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.790488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.790652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.790679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 
00:34:34.780 [2024-07-14 19:06:22.790783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.790811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.790955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.790981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.791098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.791123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.791285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.791327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.791533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.791561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 
00:34:34.780 [2024-07-14 19:06:22.791672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.791704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.791841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.791869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.792019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.792045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.792167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.792208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.792349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.792378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 
00:34:34.780 [2024-07-14 19:06:22.792545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.792574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.792707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.792748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.792899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.792925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.793079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.793104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.793245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.793275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 
00:34:34.780 [2024-07-14 19:06:22.793457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.793483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.793595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.793623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.793786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.793814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.793964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.793990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.794130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.794169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 
00:34:34.780 [2024-07-14 19:06:22.794291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.794336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.794461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.794504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.780 [2024-07-14 19:06:22.794657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.780 [2024-07-14 19:06:22.794683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.780 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.794811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.794837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.794968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.794995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 
00:34:34.781 [2024-07-14 19:06:22.795094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.795120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.795270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.795295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.795417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.795443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.795573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.795600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.795731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.795756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 
00:34:34.781 [2024-07-14 19:06:22.795911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.795937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.796031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.796057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.796146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.796175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.796273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.796299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.796400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.796427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 
00:34:34.781 [2024-07-14 19:06:22.796556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.796582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.796682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.796707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.796869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.796902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.797033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.797060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.797204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.797247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 
00:34:34.781 [2024-07-14 19:06:22.797390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.797431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.797607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.797651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.797797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.797823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.797955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.797982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.798079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.798104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 
00:34:34.781 [2024-07-14 19:06:22.798254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.798282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.798428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.798456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.798566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.798596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.798741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.798770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.798920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.798946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 
00:34:34.781 [2024-07-14 19:06:22.799063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.799088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.799229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.799257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.799390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.799418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.799551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.799579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.799708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.799733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 
00:34:34.781 [2024-07-14 19:06:22.799827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.781 [2024-07-14 19:06:22.799852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.781 qpair failed and we were unable to recover it. 00:34:34.781 [2024-07-14 19:06:22.799986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.800011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.800110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.800136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.800245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.800273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.800410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.800443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 
00:34:34.782 [2024-07-14 19:06:22.800621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.800649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.800811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.800839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.801003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.801029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.801166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.801194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.801325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.801350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 
00:34:34.782 [2024-07-14 19:06:22.801450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.801475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.801570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.801612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.801747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.801775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.801925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.801951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.802060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.802085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 
00:34:34.782 [2024-07-14 19:06:22.802184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.802209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.802307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.802332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.802444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.802473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.802595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.802636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.802776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.802804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 
00:34:34.782 [2024-07-14 19:06:22.802946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.802972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.803069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.803095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.803194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.803219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.803361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.803389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.803576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.803604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 
00:34:34.782 [2024-07-14 19:06:22.803764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.803792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.803948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.803974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.804067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.804093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.804211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.804240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 00:34:34.782 [2024-07-14 19:06:22.804482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.782 [2024-07-14 19:06:22.804510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.782 qpair failed and we were unable to recover it. 
00:34:34.782 [2024-07-14 19:06:22.804638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.804666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.804826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.804854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.804976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.805002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.805093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.805118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.805247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.805272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.805449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.805477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.805665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.805693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.805799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.805827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.805967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.805993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.806110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.806135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.806261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.806302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.782 [2024-07-14 19:06:22.806415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.782 [2024-07-14 19:06:22.806444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.782 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.806561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.806601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.806777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.806811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.806966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.806993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.807135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.807173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.807306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.807351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.807471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.807515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.807666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.807692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.807812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.807837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.807949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.807976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.808104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.808130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.808282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.808308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.808458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.808484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.808611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.808638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.808795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.808820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.808949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.808975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.809116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.809144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.809256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.809285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.809425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.809453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.809611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.809656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.809806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.809832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.809956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.809986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.810204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.810258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.810386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.810415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.810569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.810613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.810736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.810763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.810867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.810905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.811038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.811063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.811210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.811238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.811370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.811398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.811531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.811559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.811730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.811777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.811907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.811934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.812058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.812100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.812255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.812298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.812445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.812489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.812580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.812606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.812729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.812755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.812873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.812904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.813029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.813055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.813155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.783 [2024-07-14 19:06:22.813180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.783 qpair failed and we were unable to recover it.
00:34:34.783 [2024-07-14 19:06:22.813338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.813364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.813492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.813518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.813645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.813671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.813798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.813829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.813957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.813984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.814147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.814174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.814274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.814300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.814403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.814428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.814529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.814554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.814683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.814708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.814807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.814833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.814963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.814991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.815124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.815150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.815303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.815330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.815456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.815481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.815603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.815629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.815791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.815817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.815931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.815961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.816130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.816158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.816293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.816321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.816458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.816486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.816621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.816649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.816790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.816818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.817000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.817027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.817170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.817213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.817336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.817378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.817516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.817560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.817711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.817737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.817834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.817861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.818043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.818073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.818199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.818245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.818386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.818414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.818554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.818581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.818707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.818735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.818871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.818905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.819066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.819094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.819220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.819248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.819385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.819413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.819575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.819621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.819747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.819772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.784 qpair failed and we were unable to recover it.
00:34:34.784 [2024-07-14 19:06:22.819943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.784 [2024-07-14 19:06:22.819970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.785 qpair failed and we were unable to recover it.
00:34:34.785 [2024-07-14 19:06:22.820119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.785 [2024-07-14 19:06:22.820166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.785 qpair failed and we were unable to recover it.
00:34:34.785 [2024-07-14 19:06:22.820300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.785 [2024-07-14 19:06:22.820326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.785 qpair failed and we were unable to recover it.
00:34:34.785 [2024-07-14 19:06:22.820452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.820478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.820581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.820607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.820735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.820760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.820889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.820915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.821036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.821061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 
00:34:34.785 [2024-07-14 19:06:22.821162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.821187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.821320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.821345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.821460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.821488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.821629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.821657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.821792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.821820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 
00:34:34.785 [2024-07-14 19:06:22.821938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.821964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.822088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.822114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.822255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.822283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.822445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.822473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.822635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.822667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 
00:34:34.785 [2024-07-14 19:06:22.822775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.822803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.822935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.822961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.823062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.823088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.823249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.823275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.823423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.823453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 
00:34:34.785 [2024-07-14 19:06:22.823592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.823620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.823730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.823758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.823922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.823962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.824069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.824096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.824238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.824283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 
00:34:34.785 [2024-07-14 19:06:22.824433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.824459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.824566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.824594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.824711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.824737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.824873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.824904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.825031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.825056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 
00:34:34.785 [2024-07-14 19:06:22.825180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.825205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.825332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.785 [2024-07-14 19:06:22.825358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.785 qpair failed and we were unable to recover it. 00:34:34.785 [2024-07-14 19:06:22.825460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.825485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.825575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.825600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.825730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.825757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 
00:34:34.786 [2024-07-14 19:06:22.825886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.825913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.826087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.826131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.826252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.826298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.826445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.826487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.826620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.826646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 
00:34:34.786 [2024-07-14 19:06:22.826739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.826767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.826895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.826925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.827015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.827041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.827154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.827183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.827298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.827326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 
00:34:34.786 [2024-07-14 19:06:22.827440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.827468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.827651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.827695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.827847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.827873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.827982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.828009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.828155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.828198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 
00:34:34.786 [2024-07-14 19:06:22.828384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.828427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.828545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.828588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.828685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.828712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.828839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.828865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.828997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.829022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 
00:34:34.786 [2024-07-14 19:06:22.829175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.829203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.829304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.829332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.829493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.829521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.829651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.829679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.829847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.829881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 
00:34:34.786 [2024-07-14 19:06:22.830001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.830027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.830156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.830184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.830280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.830307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.830454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.830498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.830614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.830643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 
00:34:34.786 [2024-07-14 19:06:22.830784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.830810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.830928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.830957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.831110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.831137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.831286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.831330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.831480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.831522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 
00:34:34.786 [2024-07-14 19:06:22.831675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.831702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.831828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.786 [2024-07-14 19:06:22.831853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.786 qpair failed and we were unable to recover it. 00:34:34.786 [2024-07-14 19:06:22.832004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.832033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.832226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.832295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.832428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.832456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 
00:34:34.787 [2024-07-14 19:06:22.832563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.832604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.832704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.832729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.832853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.832884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.832983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.833008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.833147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.833175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 
00:34:34.787 [2024-07-14 19:06:22.833338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.833366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.833470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.833498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.833637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.833666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.833797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.833825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.834034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.834062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 
00:34:34.787 [2024-07-14 19:06:22.834187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.834215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.834323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.834351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.834513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.834541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.834639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.834667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.834803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.834830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 
00:34:34.787 [2024-07-14 19:06:22.834972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.834998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.835132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.835157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.835300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.835328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.835463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.835491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 00:34:34.787 [2024-07-14 19:06:22.835704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.787 [2024-07-14 19:06:22.835732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.787 qpair failed and we were unable to recover it. 
00:34:34.787 [2024-07-14 19:06:22.835861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.835901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.836024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.836049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.836215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.836243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.836388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.836416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.836557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.836585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.836802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.836857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.837026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.837053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.837202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.837247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.837391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.837452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.837614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.837656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.837788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.837814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.837984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.838014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.838153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.838181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.838299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.838324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.838476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.838504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.838606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.838634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.838759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.838787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.838951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.787 [2024-07-14 19:06:22.838979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.787 qpair failed and we were unable to recover it.
00:34:34.787 [2024-07-14 19:06:22.839080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.839107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.839252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.839296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.839450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.839493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.839625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.839650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.839800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.839826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.839994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.840024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.840166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.840195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.840360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.840388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.840489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.840517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.840661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.840689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.840822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.840850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.841002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.841029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.841144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.841187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.841343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.841372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.841529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.841573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.841699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.841725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.841872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.841903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.842030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.842056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.842179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.842204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.842308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.842335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.842443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.842470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.842603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.842628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.842748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.842773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.842884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.842910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.843006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.843031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.843151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.843177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.843329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.843357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.843522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.843550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.843681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.843709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.843839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.843870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.844065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.844109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.844259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.844302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.844475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.844517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.844663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.844688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.844820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.844846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.844975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.845004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.845150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.845179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.845288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.845316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.845495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.845524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.845625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.845653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.788 qpair failed and we were unable to recover it.
00:34:34.788 [2024-07-14 19:06:22.845760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.788 [2024-07-14 19:06:22.845788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.845932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.845958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.846089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.846114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.846256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.846284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.846390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.846418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.846549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.846578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.846713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.846741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.846868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.846922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.847083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.847111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.847283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.847326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.847509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.847556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.847727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.847770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.847899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.847926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.848095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.848125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.848262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.848290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.848418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.848461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.848582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.848609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.848759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.848787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.848941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.848980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.849124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.849153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.849261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.849289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.849392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.849420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.849562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.849593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.849770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.849797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.849899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.849926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.850039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.850083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.850252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.850295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.850408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.850450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.850595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.850626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.850730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.850758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.850893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.850936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.851049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.851077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.851209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.851237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.851396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.851424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.789 qpair failed and we were unable to recover it.
00:34:34.789 [2024-07-14 19:06:22.851590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.789 [2024-07-14 19:06:22.851635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.790 qpair failed and we were unable to recover it.
00:34:34.790 [2024-07-14 19:06:22.851789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.790 [2024-07-14 19:06:22.851815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.790 qpair failed and we were unable to recover it.
00:34:34.790 [2024-07-14 19:06:22.851937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.790 [2024-07-14 19:06:22.851968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.790 qpair failed and we were unable to recover it.
00:34:34.790 [2024-07-14 19:06:22.852116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.852166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.852340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.852384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.852507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.852554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.852691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.852718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.852820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.852845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 
00:34:34.790 [2024-07-14 19:06:22.852948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.852973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.853151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.853179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.853278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.853306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.853412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.853440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.853553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.853598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 
00:34:34.790 [2024-07-14 19:06:22.853708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.853734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.853886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.853913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.854035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.854078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.854260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.854303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.854447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.854490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 
00:34:34.790 [2024-07-14 19:06:22.854617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.854644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.854774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.854799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.854897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.854923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.855068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.855097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.855236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.855264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 
00:34:34.790 [2024-07-14 19:06:22.855400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.855428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.855565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.855593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.855704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.855732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.855892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.855935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.856034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.856060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 
00:34:34.790 [2024-07-14 19:06:22.856209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.856250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.856377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.856410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.856563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.856592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.856728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.856756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.856857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.856896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 
00:34:34.790 [2024-07-14 19:06:22.857040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.857066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.857158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.857183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.857298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.857327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.857516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.857544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.857671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.857699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 
00:34:34.790 [2024-07-14 19:06:22.857854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.857888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.858000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.858025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.858124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.790 [2024-07-14 19:06:22.858149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.790 qpair failed and we were unable to recover it. 00:34:34.790 [2024-07-14 19:06:22.858297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.858326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.858454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.858482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 
00:34:34.791 [2024-07-14 19:06:22.858591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.858619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.858754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.858782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.858941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.858980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.859119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.859147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.859269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.859313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 
00:34:34.791 [2024-07-14 19:06:22.859452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.859496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.859642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.859684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.859792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.859818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.859947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.859975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.860100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.860125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 
00:34:34.791 [2024-07-14 19:06:22.860224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.860249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.860381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.860406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.860556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.860581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.860702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.860731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.860826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.860852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 
00:34:34.791 [2024-07-14 19:06:22.860972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.860999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.861125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.861151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.861269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.861295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.861449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.861475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.861596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.861643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 
00:34:34.791 [2024-07-14 19:06:22.861806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.861833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.861934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.861960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.862084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.862109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.862234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.862262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.862420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.862448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 
00:34:34.791 [2024-07-14 19:06:22.862588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.862616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.862728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.862756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.862896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.862937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.863062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.863088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.863199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.863227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 
00:34:34.791 [2024-07-14 19:06:22.863348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.863376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.863486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.863514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.863618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.863645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.863780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.863807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.863924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.863949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 
00:34:34.791 [2024-07-14 19:06:22.864070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.864095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.864226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.864267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.864432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.864459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.791 [2024-07-14 19:06:22.864655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.791 [2024-07-14 19:06:22.864682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.791 qpair failed and we were unable to recover it. 00:34:34.792 [2024-07-14 19:06:22.864842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.864870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 
00:34:34.792 [2024-07-14 19:06:22.864996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.865025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 00:34:34.792 [2024-07-14 19:06:22.865126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.865151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 00:34:34.792 [2024-07-14 19:06:22.865280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.865305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 00:34:34.792 [2024-07-14 19:06:22.865454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.865481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 00:34:34.792 [2024-07-14 19:06:22.865672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.865700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 
00:34:34.792 [2024-07-14 19:06:22.865801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.865828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 00:34:34.792 [2024-07-14 19:06:22.865986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.866012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 00:34:34.792 [2024-07-14 19:06:22.866141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.866167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 00:34:34.792 [2024-07-14 19:06:22.866253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.866278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 00:34:34.792 [2024-07-14 19:06:22.866419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.792 [2024-07-14 19:06:22.866447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.792 qpair failed and we were unable to recover it. 
00:34:34.793 [2024-07-14 19:06:22.871410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.871438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.871573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.871601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.871739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.871766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.871948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.871988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.872122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.872149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.872307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.872351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.872501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.872545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.872672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.872698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.872825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.872851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.872983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.793 [2024-07-14 19:06:22.873010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.793 qpair failed and we were unable to recover it.
00:34:34.793 [2024-07-14 19:06:22.873155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.873184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.873345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.873373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.873534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.873562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.873724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.873751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.873861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.873896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 
00:34:34.793 [2024-07-14 19:06:22.874019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.874045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.874144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.874169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.874334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.874362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.874522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.874575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.874714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.874742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 
00:34:34.793 [2024-07-14 19:06:22.874913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.874939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.875069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.875094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.875186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.875211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.875387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.875415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.875571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.875596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 
00:34:34.793 [2024-07-14 19:06:22.875738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.875766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.875949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.875975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.876098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.876124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.876209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.876238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.876379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.876407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 
00:34:34.793 [2024-07-14 19:06:22.876525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.876550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.876726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.876754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.876861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.876895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.877004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.877029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.877149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.877174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 
00:34:34.793 [2024-07-14 19:06:22.877318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.877346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.877470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.877514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.877650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.877678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.877866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.877896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 00:34:34.793 [2024-07-14 19:06:22.878025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.793 [2024-07-14 19:06:22.878050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.793 qpair failed and we were unable to recover it. 
00:34:34.793 [2024-07-14 19:06:22.878140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.878165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.878277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.878305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.878436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.878477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.878602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.878644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.878782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.878810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 
00:34:34.794 [2024-07-14 19:06:22.878948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.878974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.879090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.879115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.879242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.879270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.879391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.879433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.879559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.879584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 
00:34:34.794 [2024-07-14 19:06:22.879696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.879724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.879858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.879890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.879996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.880021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.880157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.880185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.880326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.880353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 
00:34:34.794 [2024-07-14 19:06:22.880480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.880515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.880683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.880711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.880830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.880854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.880987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.881013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.881104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.881129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 
00:34:34.794 [2024-07-14 19:06:22.881317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.881342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.881455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.881483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.881599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.881627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.881729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.881757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.881935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.881960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 
00:34:34.794 [2024-07-14 19:06:22.882086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.882113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.882218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.882244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.882396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.882421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.882570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.882600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 00:34:34.794 [2024-07-14 19:06:22.882764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.794 [2024-07-14 19:06:22.882792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.794 qpair failed and we were unable to recover it. 
00:34:34.794 [2024-07-14 19:06:22.882944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.794 [2024-07-14 19:06:22.882970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.794 qpair failed and we were unable to recover it.
00:34:34.794 [2024-07-14 19:06:22.883058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.794 [2024-07-14 19:06:22.883083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.794 qpair failed and we were unable to recover it.
00:34:34.794 [2024-07-14 19:06:22.883240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.794 [2024-07-14 19:06:22.883265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.794 qpair failed and we were unable to recover it.
00:34:34.794 [2024-07-14 19:06:22.883405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.794 [2024-07-14 19:06:22.883433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.794 qpair failed and we were unable to recover it.
00:34:34.794 [2024-07-14 19:06:22.883569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.883597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.883757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.883785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.883925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.883955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.884094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.884122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.884239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.884264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.884359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.884384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.884532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.884560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.884770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.884798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.884945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.884975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.885079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.885104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.885254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.885280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.885446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.885483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.885660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.885689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.885822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.885848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.885954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.885979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.886105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.886130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.886295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.886320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.886488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.886516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.886621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.886649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.886786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.886811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.886942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.886969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.887074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.887099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.887228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.887254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.887348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.887373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.887499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.887527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.887645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.887670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.887815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.887840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.887997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.888026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.888173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.888198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.888325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.888350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.888501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.888529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.888674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.888700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.888824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.888849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.889032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.889060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.889206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.889231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.889327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.889352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.889543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.889568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.889722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.889747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.889866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.889915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.890044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.890073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.795 [2024-07-14 19:06:22.890229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.795 [2024-07-14 19:06:22.890254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.795 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.890348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.890373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.890491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.890520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.890674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.890699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.890825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.890850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.891034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.891062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.891204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.891229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.891379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.891421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.891531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.891559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.891698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.891723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.891847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.891872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.892060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.892088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.892234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.892261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.892389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.892433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.892559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.892587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.892750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.892775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.892893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.892935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.893073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.893101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.893244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.893269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.893380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.893405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.893554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.893579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.893702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.893727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.893883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.893926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.894066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.894094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.894208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.894234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.894383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.894408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.894564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.894592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.894708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.894733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.894829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.894854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.895014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.895053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.895160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.895187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.895325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.895351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.895474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.895503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.895620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.895645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.895749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.895775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.895928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.895958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.896100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.896126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.896248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.896273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.896416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.896444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.896566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.896591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.896689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.896714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.796 qpair failed and we were unable to recover it.
00:34:34.796 [2024-07-14 19:06:22.896888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.796 [2024-07-14 19:06:22.896916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.897081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.797 [2024-07-14 19:06:22.897106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.897275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.797 [2024-07-14 19:06:22.897303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.897403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.797 [2024-07-14 19:06:22.897431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.897572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.797 [2024-07-14 19:06:22.897598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.897749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.797 [2024-07-14 19:06:22.897788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.897905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.797 [2024-07-14 19:06:22.897933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.898046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.797 [2024-07-14 19:06:22.898072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.898191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.797 [2024-07-14 19:06:22.898216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.898340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.797 [2024-07-14 19:06:22.898368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:34.797 qpair failed and we were unable to recover it.
00:34:34.797 [2024-07-14 19:06:22.898485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.898511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.898631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.898656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.898802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.898830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.898987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.899013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.899131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.899157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 
00:34:34.797 [2024-07-14 19:06:22.899278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.899306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.899424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.899450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.899571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.899597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.899755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.899784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.899910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.899935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 
00:34:34.797 [2024-07-14 19:06:22.900042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.900067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.900177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.900217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.900326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.900359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.900512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.900557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.900691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.900720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 
00:34:34.797 [2024-07-14 19:06:22.900869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.900906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.901012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.901038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.901141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.901168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.901331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.901357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.901452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.901494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 
00:34:34.797 [2024-07-14 19:06:22.901658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.901687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.901826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.901851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.901991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.902016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.902123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.902148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.902282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.902307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 
00:34:34.797 [2024-07-14 19:06:22.902395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.902420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.902528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.902555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.902712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.902737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.902863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.902896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.903051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.903093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 
00:34:34.797 [2024-07-14 19:06:22.903239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.797 [2024-07-14 19:06:22.903265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.797 qpair failed and we were unable to recover it. 00:34:34.797 [2024-07-14 19:06:22.903388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.903414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.903596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.903625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.903749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.903775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.903867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.903900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 
00:34:34.798 [2024-07-14 19:06:22.904046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.904074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.904190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.904215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.904340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.904366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.904529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.904554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.904673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.904702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 
00:34:34.798 [2024-07-14 19:06:22.904875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.904910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.905045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.905073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.905176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.905216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.905309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.905334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.905506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.905537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 
00:34:34.798 [2024-07-14 19:06:22.905686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.905712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.905838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.905865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.906036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.906065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.906183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.906208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.906363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.906389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 
00:34:34.798 [2024-07-14 19:06:22.906609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.906661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.906812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.906837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.906942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.906970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.907061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.907087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.907235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.907260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 
00:34:34.798 [2024-07-14 19:06:22.907396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.907424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.907540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.907571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.907722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.907748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.907917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.907947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.908082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.908111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 
00:34:34.798 [2024-07-14 19:06:22.908258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.908283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.908380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.908407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.908599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.908625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.908719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.908744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 00:34:34.798 [2024-07-14 19:06:22.908872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.908903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.798 qpair failed and we were unable to recover it. 
00:34:34.798 [2024-07-14 19:06:22.909077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.798 [2024-07-14 19:06:22.909105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.909225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.909251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.909366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.909392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.909497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.909525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.909697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.909722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 
00:34:34.799 [2024-07-14 19:06:22.909845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.909904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.910045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.910073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.910213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.910238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.910327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.910352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.910463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.910491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 
00:34:34.799 [2024-07-14 19:06:22.910605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.910630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.910725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.910752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.910895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.910924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.911046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.911071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.911166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.911191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 
00:34:34.799 [2024-07-14 19:06:22.911372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.911401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.911523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.911549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.911644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.911670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.911788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.911816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 00:34:34.799 [2024-07-14 19:06:22.911960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.911987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 
00:34:34.799 [2024-07-14 19:06:22.912110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.799 [2024-07-14 19:06:22.912135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.799 qpair failed and we were unable to recover it. 
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously from 19:06:22.912295 through 19:06:22.930780 (errno = 111, i.e. connection refused) for tqpair=0x13aff20, tqpair=0x7f8a50000b90, and tqpair=0x7f8a60000b90, all with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:34.802 [2024-07-14 19:06:22.930911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.930963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.931117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.931144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.931241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.931267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.931471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.931521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.931666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.931691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 
00:34:34.802 [2024-07-14 19:06:22.931818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.931844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.931988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.932016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.932119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.932146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.932251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.932278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.932486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.932513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 
00:34:34.802 [2024-07-14 19:06:22.932668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.932693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.932835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.932865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.933017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.933043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.933143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.933169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.933330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.933356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 
00:34:34.802 [2024-07-14 19:06:22.933481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.933507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.933643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.933671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.933815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.933859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.934028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.934057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.934182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.934225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 
00:34:34.802 [2024-07-14 19:06:22.934394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.934449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.802 [2024-07-14 19:06:22.934590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.802 [2024-07-14 19:06:22.934640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.802 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.934755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.934781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.934905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.934931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.935061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.935090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 
00:34:34.803 [2024-07-14 19:06:22.935198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.935227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.935404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.935429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.935526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.935556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.935711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.935739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.935886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.935917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 
00:34:34.803 [2024-07-14 19:06:22.936038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.936064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.936159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.936185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.936340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.936368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.936513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.936542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.936657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.936683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 
00:34:34.803 [2024-07-14 19:06:22.936846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.936872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.937030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.937056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.937174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.937200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.937320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.937346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.937446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.937473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 
00:34:34.803 [2024-07-14 19:06:22.937575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.937602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.937755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.937785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.937959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.937986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.938115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.938157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.938307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.938335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 
00:34:34.803 [2024-07-14 19:06:22.938476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.938525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.938680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.938706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.938873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.938909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.939030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.939056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.939221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.939251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 
00:34:34.803 [2024-07-14 19:06:22.939367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.939394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.939496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.939522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.939648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.939678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.939798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.939827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.939958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.939984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 
00:34:34.803 [2024-07-14 19:06:22.940083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.940110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.940247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.940276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.940414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.940442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.940614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.940639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.940808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.940837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 
00:34:34.803 [2024-07-14 19:06:22.940992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.941018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.941128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.803 [2024-07-14 19:06:22.941154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.803 qpair failed and we were unable to recover it. 00:34:34.803 [2024-07-14 19:06:22.941272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.941298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.941391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.941417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.941535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.941564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 
00:34:34.804 [2024-07-14 19:06:22.941700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.941729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.941868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.941899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.942004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.942035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.942206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.942235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.942399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.942428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 
00:34:34.804 [2024-07-14 19:06:22.942554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.942579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.942700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.942725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.942851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.942882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.943001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.943027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.943122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.943148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 
00:34:34.804 [2024-07-14 19:06:22.943273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.943299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.943423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.943452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.943556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.943585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.943706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.943733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.943855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.943886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 
00:34:34.804 [2024-07-14 19:06:22.944032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.944058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.944192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.944233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.944354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.944380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.944513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.944540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.944660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.944689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 
00:34:34.804 [2024-07-14 19:06:22.944828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.944857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.945039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.945065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.945184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.945226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.945368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.945396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.945510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.945538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 
00:34:34.804 [2024-07-14 19:06:22.945684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.945709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.945826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.945852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.946026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.946052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.946179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.946222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.946355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.946381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 
00:34:34.804 [2024-07-14 19:06:22.946509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.946534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.946677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.946706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.946842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.946870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.947035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.804 [2024-07-14 19:06:22.947061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.804 qpair failed and we were unable to recover it. 00:34:34.804 [2024-07-14 19:06:22.947181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.947224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 
00:34:34.805 [2024-07-14 19:06:22.947333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.947361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.947502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.947531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.947708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.947733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.947836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.947885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.948026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.948054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 
00:34:34.805 [2024-07-14 19:06:22.948158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.948187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.948310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.948336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.948432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.948457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.948623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.948649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.948798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.948824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 
00:34:34.805 [2024-07-14 19:06:22.948955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.948982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.949111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.949154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.949292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.949321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.949451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.949480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.949624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.949650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 
00:34:34.805 [2024-07-14 19:06:22.949781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.949807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.949933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.949963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.950100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.950129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.950278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.950305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.950408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.950433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 
00:34:34.805 [2024-07-14 19:06:22.950536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.950561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.950710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.950738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.950886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.950912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.951038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.951064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.951210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.951238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 
00:34:34.805 [2024-07-14 19:06:22.951379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.951407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.951554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.951581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.951705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.951731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.951886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.951915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.952035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.952064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 
00:34:34.805 [2024-07-14 19:06:22.952189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.952216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.952322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.952348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.952488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.952517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.952655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.952685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.952812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.952842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 
00:34:34.805 [2024-07-14 19:06:22.952987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.953013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.953139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.953167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.953313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.953340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.953475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.953502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.805 qpair failed and we were unable to recover it. 00:34:34.805 [2024-07-14 19:06:22.953631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.805 [2024-07-14 19:06:22.953674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 
00:34:34.806 [2024-07-14 19:06:22.953814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.953843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.953961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.953991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.954108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.954135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.954228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.954254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.954401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.954429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 
00:34:34.806 [2024-07-14 19:06:22.954562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.954591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.954737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.954763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.954932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.954962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.955131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.955160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.955260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.955289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 
00:34:34.806 [2024-07-14 19:06:22.955435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.955461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.955586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.955612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.955766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.955794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.955934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.955964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.956129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.956155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 
00:34:34.806 [2024-07-14 19:06:22.956250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.956293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.956437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.956466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.956601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.956629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.956758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.956783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.956934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.956976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 
00:34:34.806 [2024-07-14 19:06:22.957124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.957153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.957261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.957290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.957432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.957457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.957591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.957617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.957782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.957808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 
00:34:34.806 [2024-07-14 19:06:22.957963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.957990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.958184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.958210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.958381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.958409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.958523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.958551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.958658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.958687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 
00:34:34.806 [2024-07-14 19:06:22.958828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.958854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.958986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.959014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.959185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.959211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.959365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.959391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.959495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.959526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 
00:34:34.806 [2024-07-14 19:06:22.959659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.959685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.959799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.959828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.959978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.960007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.960156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.960183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 00:34:34.806 [2024-07-14 19:06:22.960309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.806 [2024-07-14 19:06:22.960335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:34.806 qpair failed and we were unable to recover it. 
00:34:34.807 [2024-07-14 19:06:22.960481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.960509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.960647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.960676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.960852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.960883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.961028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.961057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.961191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.961220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.961364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.961393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.961511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.961538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.961662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.961688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.961867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.961917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.962059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.962088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.962205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.962231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.962393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.962419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.962572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.962599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.962733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.962759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.962887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.962914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.963021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.963046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.963171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.963197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.963364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.963390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.963545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.963570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.963709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.963737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.963901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.963930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.964066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.964095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.964212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.964238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.964364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.964390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.964538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.964564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.964687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.964713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.964840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.964866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.964979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.965005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.965159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.965188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.965336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.965361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.965456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.965482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.965582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.965607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.965750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.965779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.807 [2024-07-14 19:06:22.965908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.807 [2024-07-14 19:06:22.965939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.807 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.966080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.966110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.966276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.966304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.966443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.966472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.966577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.966605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.966745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.966771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.966900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.966927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.967086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.967114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.967223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.967252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.967374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.967400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.967526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.967551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.967679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.967708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.967818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.967847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.967986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.968013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.968123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.968150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.968290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.968319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.968432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.968461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.968635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.968661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.968796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.968824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.968935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.968964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.969067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.969095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.969220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.969246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.969374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.969400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.969528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.969556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.969665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.969695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.969836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.969862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.969969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.969995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.970174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.970200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.970637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.970671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.970799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.808 [2024-07-14 19:06:22.970826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:34.808 qpair failed and we were unable to recover it.
00:34:34.808 [2024-07-14 19:06:22.970933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.970960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.971108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.971138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.971276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.971321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.971449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.971477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.971582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.971607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.971708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.971734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.971825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.971869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.972006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.972032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.972135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.972161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.972254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.972280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.972382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.972408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.972557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.972588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.972728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.972757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.972895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.972924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.973032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.973061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.973235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.973261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.973387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.973432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.973586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.973612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.973740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.973766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.973874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.973917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.974014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.974040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.094 [2024-07-14 19:06:22.974165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.094 [2024-07-14 19:06:22.974194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.094 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.974315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.974343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.974490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.974516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.974613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.974638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.974806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.974835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.974981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.975011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.975147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.975174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.975273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.975300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.975417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.975446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.975582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.975610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.975731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.095 [2024-07-14 19:06:22.975758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.095 qpair failed and we were unable to recover it.
00:34:35.095 [2024-07-14 19:06:22.975854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.975889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.976046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.976075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.976187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.976217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.976332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.976357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.976449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.976475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 
00:34:35.095 [2024-07-14 19:06:22.976588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.976618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.976726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.976754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.976870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.976904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.977008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.977034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.977146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.977175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 
00:34:35.095 [2024-07-14 19:06:22.977280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.977309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.977454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.977480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.977634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.977676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.977812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.977841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.977970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.977999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 
00:34:35.095 [2024-07-14 19:06:22.978113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.978139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.978265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.978292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.978443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.978472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.978577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.978605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.978779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.978812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 
00:34:35.095 [2024-07-14 19:06:22.978963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.978990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.979095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.979122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.979230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.979256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.979352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.979378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.979480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.979505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 
00:34:35.095 [2024-07-14 19:06:22.979678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.979706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.979839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.979868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.980034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.980060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.980196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.980222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 00:34:35.095 [2024-07-14 19:06:22.980324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.095 [2024-07-14 19:06:22.980350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.095 qpair failed and we were unable to recover it. 
00:34:35.096 [2024-07-14 19:06:22.980451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.980495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.980614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.980640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.980734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.980760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.980911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.980940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.981043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.981071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 
00:34:35.096 [2024-07-14 19:06:22.981210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.981236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.981333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.981359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.981511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.981539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.981651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.981694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.981836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.981861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 
00:34:35.096 [2024-07-14 19:06:22.982018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.982047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.982203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.982229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.982357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.982382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.982509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.982535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.982658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.982702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 
00:34:35.096 [2024-07-14 19:06:22.982822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.982850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.983014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.983043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.983168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.983194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.983289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.983316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.983472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.983498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 
00:34:35.096 [2024-07-14 19:06:22.983620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.983648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.983787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.983814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.983923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.983950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.984062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.984088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.984203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.984232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 
00:34:35.096 [2024-07-14 19:06:22.984374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.984399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.984523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.984549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.984665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.984694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.984802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.984831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.985016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.985047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 
00:34:35.096 [2024-07-14 19:06:22.985141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.985183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.985324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.985353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.985472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.985502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.985648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.985674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.985805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.985833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 
00:34:35.096 [2024-07-14 19:06:22.985936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.985962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.986087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.986116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.986242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.986268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.986397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.986424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 00:34:35.096 [2024-07-14 19:06:22.986570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.096 [2024-07-14 19:06:22.986600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.096 qpair failed and we were unable to recover it. 
00:34:35.096 [2024-07-14 19:06:22.986727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.986755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.986874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.986909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.987065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.987091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.987261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.987290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.987405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.987434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 
00:34:35.097 [2024-07-14 19:06:22.987555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.987581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.987712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.987738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.987886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.987916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.988042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.988071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.988191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.988217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 
00:34:35.097 [2024-07-14 19:06:22.988347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.988373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.988523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.988551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.988687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.988716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.988840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.988865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 00:34:35.097 [2024-07-14 19:06:22.989001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.097 [2024-07-14 19:06:22.989027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.097 qpair failed and we were unable to recover it. 
00:34:35.097 [2024-07-14 19:06:22.989151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.097 [2024-07-14 19:06:22.989180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.097 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats with advancing timestamps (19:06:22.989 through 19:06:22.995) for tqpair=0x7f8a50000b90 ...]
00:34:35.098 [2024-07-14 19:06:22.995481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.098 [2024-07-14 19:06:22.995520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.098 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats with advancing timestamps (19:06:22.995 through 19:06:23.006) for tqpair=0x13aff20 ...]
00:34:35.100 [2024-07-14 19:06:23.006840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.006865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.007022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.007053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.007176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.007203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.007358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.007382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.007488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.007513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 
00:34:35.100 [2024-07-14 19:06:23.007662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.007690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.007799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.007827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.007969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.007998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.008124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.008150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.008277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.008302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 
00:34:35.100 [2024-07-14 19:06:23.008446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.008474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.008581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.008604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.008731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.008756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.008874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.008910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.009027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.009055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 
00:34:35.100 [2024-07-14 19:06:23.009212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.009238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.009361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.009386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.009499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.009527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.009652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.009680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.009802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.009827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 
00:34:35.100 [2024-07-14 19:06:23.009943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.009970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.010087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.010115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.010249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.010277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.010427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.100 [2024-07-14 19:06:23.010453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.100 qpair failed and we were unable to recover it. 00:34:35.100 [2024-07-14 19:06:23.010553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.010578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 
00:34:35.101 [2024-07-14 19:06:23.010699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.010727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.010844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.010871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.011023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.011049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.011177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.011203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.011359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.011387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 
00:34:35.101 [2024-07-14 19:06:23.011521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.011549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.011700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.011725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.011852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.011882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.012057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.012082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.012176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.012202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 
00:34:35.101 [2024-07-14 19:06:23.012295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.012320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.012408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.012433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.012544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.012572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.012711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.012739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.012848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.012872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 
00:34:35.101 [2024-07-14 19:06:23.013018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.013059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.013203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.013231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.013356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.013384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.013505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.013530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.013649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.013674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 
00:34:35.101 [2024-07-14 19:06:23.013791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.013818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.013953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.013980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.014130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.014155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.014330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.014358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.014470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.014498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 
00:34:35.101 [2024-07-14 19:06:23.014601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.014632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.014781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.014807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.014945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.014971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.015100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.015127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.015272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.015298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 
00:34:35.101 [2024-07-14 19:06:23.015422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.015447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.015540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.015564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.015727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.015751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.015852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.015882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.016017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.016041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 
00:34:35.101 [2024-07-14 19:06:23.016217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.016244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.016377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.016404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.016529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.016558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.016682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.016706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.101 qpair failed and we were unable to recover it. 00:34:35.101 [2024-07-14 19:06:23.016804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.101 [2024-07-14 19:06:23.016830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 
00:34:35.102 [2024-07-14 19:06:23.016983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.017011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.017121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.017149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.017275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.017301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.017408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.017432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.017596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.017624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 
00:34:35.102 [2024-07-14 19:06:23.017786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.017814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.017937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.017961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.018093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.018118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.018243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.018272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.018407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.018435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 
00:34:35.102 [2024-07-14 19:06:23.018555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.018584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.018712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.018737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.018843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.018869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.019018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.019046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.019164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.019188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 
00:34:35.102 [2024-07-14 19:06:23.019281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.019305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.019403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.019428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.019522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.019545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.019644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.019668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.019795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.019820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 
00:34:35.102 [2024-07-14 19:06:23.019978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.020006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.020140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.020169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.020292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.020316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.020442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.020467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.020635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.020660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 
00:34:35.102 [2024-07-14 19:06:23.020779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.020803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.020916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.020947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.021076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.021100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.021206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.021231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.021350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.021379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 
00:34:35.102 [2024-07-14 19:06:23.021501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.021526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.021644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.021670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.021792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.021820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.021923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.021953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.022082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.022107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 
00:34:35.102 [2024-07-14 19:06:23.022210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.022235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.022335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.022360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.022464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.022509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.022681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.022707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 00:34:35.102 [2024-07-14 19:06:23.022810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.022834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.102 qpair failed and we were unable to recover it. 
00:34:35.102 [2024-07-14 19:06:23.022972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.102 [2024-07-14 19:06:23.023001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.023138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.023165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.023287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.023312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.023413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.023438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.023557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.023582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 
00:34:35.103 [2024-07-14 19:06:23.023697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.023725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.023844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.023868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.023997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.024022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.024170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.024197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.024336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.024363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 
00:34:35.103 [2024-07-14 19:06:23.024479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.024503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.024612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.024638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.024795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.024824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.024936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.024965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.025083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.025108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 
00:34:35.103 [2024-07-14 19:06:23.025217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.025242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.025390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.025417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.025554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.025582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.025728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.025752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.025839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.025863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 
00:34:35.103 [2024-07-14 19:06:23.026062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.026087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.026227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.026251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.026397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.026422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.026546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.026571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.026693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.026724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 
00:34:35.103 [2024-07-14 19:06:23.026863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.026906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.027056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.027081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.027178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.027202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.027312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.027341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.027500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.027525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 
00:34:35.103 [2024-07-14 19:06:23.027625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.027651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.027778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.027802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.027968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.027994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.028092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.028117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 00:34:35.103 [2024-07-14 19:06:23.028220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.028245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.103 qpair failed and we were unable to recover it. 
00:34:35.103 [2024-07-14 19:06:23.028343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.103 [2024-07-14 19:06:23.028368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.028486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.028513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.028629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.028657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.028780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.028805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.028908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.028934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 
00:34:35.104 [2024-07-14 19:06:23.029075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.029102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.029251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.029276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.029380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.029404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.029527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.029551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.029675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.029703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 
00:34:35.104 [2024-07-14 19:06:23.029839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.029866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.029999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.030023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.030135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.030160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.030312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.030336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.030454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.030482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 
00:34:35.104 [2024-07-14 19:06:23.030595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.030620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.030721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.030746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.030888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.030916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.031024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.031051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.031198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.031224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 
00:34:35.104 [2024-07-14 19:06:23.031348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.031372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.031514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.031542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.031678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.031706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.031849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.031873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.032013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.032037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 
00:34:35.104 [2024-07-14 19:06:23.032188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.032216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.032325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.032352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.032484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.032509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.032637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.032661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.032813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.032840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 
00:34:35.104 [2024-07-14 19:06:23.032993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.033022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.033138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.033163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.033261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.033286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.033429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.033456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 00:34:35.104 [2024-07-14 19:06:23.033579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.104 [2024-07-14 19:06:23.033606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.104 qpair failed and we were unable to recover it. 
00:34:35.107 [2024-07-14 19:06:23.050676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.050705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.050834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.050862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.050986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.051011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.051164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.051190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.051304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.051332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 
00:34:35.107 [2024-07-14 19:06:23.051472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.051500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.051632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.051658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.051759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.051788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.051928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.051958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.052056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.052084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 
00:34:35.107 [2024-07-14 19:06:23.052247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.052273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.052397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.052436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.052544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.052572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.107 qpair failed and we were unable to recover it. 00:34:35.107 [2024-07-14 19:06:23.052709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.107 [2024-07-14 19:06:23.052738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.052839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.052865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 
00:34:35.108 [2024-07-14 19:06:23.052970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.052995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.053094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.053119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.053230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.053258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.053406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.053431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.053537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.053562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 
00:34:35.108 [2024-07-14 19:06:23.053685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.053714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.053825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.053853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.053972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.053997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.054093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.054119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.054265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.054293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 
00:34:35.108 [2024-07-14 19:06:23.054401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.054429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.054579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.054604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.054729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.054754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.054900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.054929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.055042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.055070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 
00:34:35.108 [2024-07-14 19:06:23.055207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.055232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.055332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.055357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.055509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.055538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.055643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.055672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.055795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.055826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 
00:34:35.108 [2024-07-14 19:06:23.055929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.055955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.056048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.056073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.056223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.056250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.056366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.056390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.056515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.056539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 
00:34:35.108 [2024-07-14 19:06:23.056687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.056715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.056845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.056873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.057124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.057155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.057259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.057284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.057419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.057447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 
00:34:35.108 [2024-07-14 19:06:23.057550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.057577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.057722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.057747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.057881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.057905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.058032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.058059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.058235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.058260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 
00:34:35.108 [2024-07-14 19:06:23.058354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.058377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.058502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.058527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.058665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.058693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.058813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.108 [2024-07-14 19:06:23.058841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.108 qpair failed and we were unable to recover it. 00:34:35.108 [2024-07-14 19:06:23.058969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.058995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 
00:34:35.109 [2024-07-14 19:06:23.059117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.059142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.059259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.059287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.059419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.059446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.059594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.059619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.059711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.059735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 
00:34:35.109 [2024-07-14 19:06:23.059858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.059893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.060039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.060069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.060176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.060200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.060319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.060345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.060453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.060481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 
00:34:35.109 [2024-07-14 19:06:23.060618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.060647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.060798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.060823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.060939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.060963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.061092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.061120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.061236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.061263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 
00:34:35.109 [2024-07-14 19:06:23.061401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.061426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.061526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.061551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.061695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.061722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.061833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.061861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.061992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.062016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 
00:34:35.109 [2024-07-14 19:06:23.062158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.062198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.062370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.062398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.062501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.062527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.062618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.062646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 00:34:35.109 [2024-07-14 19:06:23.062754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.109 [2024-07-14 19:06:23.062781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.109 qpair failed and we were unable to recover it. 
00:34:35.109-00:34:35.112 [the same error triplet repeats verbatim from 19:06:23.062938 through 19:06:23.080306, alternating between tqpair=0x7f8a60000b90 and tqpair=0x13aff20, always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."]
00:34:35.112 [2024-07-14 19:06:23.080426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.080454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.080552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.080578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.080718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.080746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.080928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.080955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.081064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.081090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 
00:34:35.112 [2024-07-14 19:06:23.081215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.081242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.081399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.081428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.081535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.081565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.081719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.081745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.081915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.081945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 
00:34:35.112 [2024-07-14 19:06:23.082089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.082117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.082261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.082291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.082407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.082435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.082584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.082616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.082749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.082777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 
00:34:35.112 [2024-07-14 19:06:23.082889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.082917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.083032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.083057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.083176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.083202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.083327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.083357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 00:34:35.112 [2024-07-14 19:06:23.083501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.112 [2024-07-14 19:06:23.083531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.112 qpair failed and we were unable to recover it. 
00:34:35.112 [2024-07-14 19:06:23.083656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.083683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.083807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.083834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.083950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.083980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.084088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.084113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.084304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.084335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 
00:34:35.113 [2024-07-14 19:06:23.084439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.084465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.084611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.084636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.084766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.084796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.084927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.084956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.085091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.085117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 
00:34:35.113 [2024-07-14 19:06:23.085311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.085337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.085440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.085466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.085621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.085651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.085758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.085783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.085889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.085914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 
00:34:35.113 [2024-07-14 19:06:23.086056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.086092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.086224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.086250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.086371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.086398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.086529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.086560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.086685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.086718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 
00:34:35.113 [2024-07-14 19:06:23.086946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.086971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.087120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.087146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.087301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.087331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.087437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.087465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.087618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.087644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 
00:34:35.113 [2024-07-14 19:06:23.087750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.087774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.087941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.087967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.088096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.088124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.088237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.088270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.088367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.088392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 
00:34:35.113 [2024-07-14 19:06:23.088557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.088583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.088672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.088698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.088821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.088846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.088977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.089020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.089157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.089185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 
00:34:35.113 [2024-07-14 19:06:23.089301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.089329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.089460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.089486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.089624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.089651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.089769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.089801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.113 [2024-07-14 19:06:23.089964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.089991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 
00:34:35.113 [2024-07-14 19:06:23.090094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.113 [2024-07-14 19:06:23.090123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.113 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.090222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.090248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.090416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.090445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.090547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.090575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.090697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.090725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 
00:34:35.114 [2024-07-14 19:06:23.090861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.090892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.091039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.091068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.091209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.091238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.091394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.091421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.091526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.091551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 
00:34:35.114 [2024-07-14 19:06:23.091701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.091730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.091867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.091901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.092032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.092063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.092170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.092198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.092345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.092381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 
00:34:35.114 [2024-07-14 19:06:23.092521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.092550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.092668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.092695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.092803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.092829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.092950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.092977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 00:34:35.114 [2024-07-14 19:06:23.093095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.114 [2024-07-14 19:06:23.093132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.114 qpair failed and we were unable to recover it. 
00:34:35.117 [2024-07-14 19:06:23.110608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.110637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.110748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.110777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.110926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.110956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.111060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.111092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.111207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.111236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 
00:34:35.117 [2024-07-14 19:06:23.111353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.111382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.111534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.111567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.111700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.111742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.111853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.111889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.112024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.112050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 
00:34:35.117 [2024-07-14 19:06:23.112154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.112184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.112300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.112326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.112460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.112489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.112630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.112659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.112811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.112838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 
00:34:35.117 [2024-07-14 19:06:23.112956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.112983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.113085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.113111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.113232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.113282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.113420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.113446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.113575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.113620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 
00:34:35.117 [2024-07-14 19:06:23.113733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.113762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.113883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.113914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.117 qpair failed and we were unable to recover it. 00:34:35.117 [2024-07-14 19:06:23.114053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.117 [2024-07-14 19:06:23.114079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.114212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.114237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.114418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.114447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 
00:34:35.118 [2024-07-14 19:06:23.114586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.114615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.114728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.114753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.114892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.114919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.115050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.115085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.115274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.115303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 
00:34:35.118 [2024-07-14 19:06:23.115445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.115471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.115601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.115626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.115749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.115778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.115891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.115925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.116058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.116083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 
00:34:35.118 [2024-07-14 19:06:23.116213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.116240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.116394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.116423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.116576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.116606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.116759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.116784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.116915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.116943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 
00:34:35.118 [2024-07-14 19:06:23.117119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.117149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.117265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.117293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.117434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.117465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.117602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.117628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.117748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.117776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 
00:34:35.118 [2024-07-14 19:06:23.117889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.117919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.118069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.118102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.118206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.118231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.118379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.118408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.118555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.118581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 
00:34:35.118 [2024-07-14 19:06:23.118734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.118762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.118869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.118922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.119036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.119064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.119202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.119231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.119347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.119373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 
00:34:35.118 [2024-07-14 19:06:23.119475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.119501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.119652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.119681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.119827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.119853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.119961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.119987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.120116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.120143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 
00:34:35.118 [2024-07-14 19:06:23.120332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.120361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.120470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.120499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.120633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.118 [2024-07-14 19:06:23.120658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.118 qpair failed and we were unable to recover it. 00:34:35.118 [2024-07-14 19:06:23.120814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.120857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 00:34:35.119 [2024-07-14 19:06:23.120979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.121005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 
00:34:35.119 [2024-07-14 19:06:23.121138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.121163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 00:34:35.119 [2024-07-14 19:06:23.121267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.121293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 00:34:35.119 [2024-07-14 19:06:23.121387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.121412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 00:34:35.119 [2024-07-14 19:06:23.121526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.121555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 00:34:35.119 [2024-07-14 19:06:23.121700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.121729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 
00:34:35.119 [2024-07-14 19:06:23.121897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.121925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 00:34:35.119 [2024-07-14 19:06:23.122024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.122050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 00:34:35.119 [2024-07-14 19:06:23.122200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.122226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 00:34:35.119 [2024-07-14 19:06:23.122371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.122400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 00:34:35.119 [2024-07-14 19:06:23.122544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.119 [2024-07-14 19:06:23.122570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.119 qpair failed and we were unable to recover it. 
00:34:35.119 [2024-07-14 19:06:23.122676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.119 [2024-07-14 19:06:23.122702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.119 qpair failed and we were unable to recover it.
00:34:35.120 [2024-07-14 19:06:23.130014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bdf20 is same with the state(5) to be set
00:34:35.120 [2024-07-14 19:06:23.130228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.120 [2024-07-14 19:06:23.130271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.120 qpair failed and we were unable to recover it.
00:34:35.121 [2024-07-14 19:06:23.136819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.121 [2024-07-14 19:06:23.136852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.121 qpair failed and we were unable to recover it.
00:34:35.121 [2024-07-14 19:06:23.137289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.137325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.121 [2024-07-14 19:06:23.137450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.137476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.121 [2024-07-14 19:06:23.137568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.137594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.121 [2024-07-14 19:06:23.137780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.137807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.121 [2024-07-14 19:06:23.137905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.137933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 
00:34:35.121 [2024-07-14 19:06:23.138045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.138071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.121 [2024-07-14 19:06:23.138184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.138213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.121 [2024-07-14 19:06:23.138328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.138354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.121 [2024-07-14 19:06:23.138457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.138484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.121 [2024-07-14 19:06:23.138619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.138647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 
00:34:35.121 [2024-07-14 19:06:23.138812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.138843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.121 [2024-07-14 19:06:23.138976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.121 [2024-07-14 19:06:23.139003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.121 qpair failed and we were unable to recover it. 00:34:35.122 [2024-07-14 19:06:23.139107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.122 [2024-07-14 19:06:23.139133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.122 qpair failed and we were unable to recover it. 00:34:35.122 [2024-07-14 19:06:23.139230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.122 [2024-07-14 19:06:23.139258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.122 qpair failed and we were unable to recover it. 00:34:35.122 [2024-07-14 19:06:23.139383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.122 [2024-07-14 19:06:23.139408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.122 qpair failed and we were unable to recover it. 
00:34:35.122 [2024-07-14 19:06:23.139590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.139623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.139754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.139780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.139906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.139932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.140066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.140095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.140213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.140241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.140349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.140375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.140494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.140529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.140662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.140688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.140807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.140833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.140966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.140998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.141128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.141154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.141251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.141278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.141430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.141456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.141558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.141584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.141689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.141715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.141857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.141898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.142017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.142043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.142155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.142181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.142309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.142337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.142479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.142506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.142680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.142709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.142853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.142891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.143058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.143097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.143209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.143238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.143385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.143429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.143568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.143612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.143766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.143792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.143894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.143922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.144066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.144109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.144207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.144233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.144353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.144397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.144544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.144589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.144684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.144710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.144814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.144840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.144955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.144982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.145087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.145131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.145237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.122 [2024-07-14 19:06:23.145264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.122 qpair failed and we were unable to recover it.
00:34:35.122 [2024-07-14 19:06:23.145362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.145387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.145509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.145534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.145632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.145658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.145756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.145781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.145920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.145947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.146048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.146074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.146193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.146222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.146327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.146356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.146465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.146494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.146612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.146641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.146748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.146775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.146871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.146903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.147008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.147033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.147136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.147162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.147283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.147308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.147452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.147480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.147611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.147639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.147737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.147765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.147872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.147921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.148022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.148048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.148199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.148240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.148359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.148387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.148538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.148566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.148666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.148694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.148798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.148826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.148988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.149033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.149145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.149172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.149301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.149344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.149531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.149581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.149681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.149707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.149813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.149839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.149946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.149973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.150098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.150124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.150219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.150244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.150348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.150372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.150491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.150517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.150603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.150628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.150737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.150764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.150888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.150932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.151035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.151061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.151187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.123 [2024-07-14 19:06:23.151213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.123 qpair failed and we were unable to recover it.
00:34:35.123 [2024-07-14 19:06:23.151305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.151331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.151455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.151484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.151600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.151641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.151806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.151834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.151977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.152016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.152134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.152163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.152260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.152303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.152428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.152457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.152590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.152619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.152739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.152766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.152902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.152928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.153034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.153064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.153163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.153188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.153322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.153350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.153468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.153512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.153614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.153642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.153744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.153772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.153887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.153913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.154003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.154028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.154134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.154176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.154309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.154337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.154451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.154479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.154587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.154618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.154747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.154776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.154934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.154960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.155095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.155122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.155214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.155239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.155345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.155370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.155490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.155519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.155633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.155674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.155838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.155866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.155990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.156016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.124 [2024-07-14 19:06:23.156143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.124 [2024-07-14 19:06:23.156170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.124 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.156270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.156296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.156446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.156477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.156675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.156704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.156815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.156844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.156971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.156997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.157101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.157132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.157231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.157257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.157410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.157438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.157556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.157599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.157734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.157762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.157887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.157930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.158022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.158048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.158139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.158165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.158276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.158306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.158435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.158477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.158584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.158613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.158728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.158758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.158920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.158960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.159059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.159087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.159208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.159253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.159401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.159445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.159586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.159629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.159727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.159753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.159884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.159910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.160079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.160108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.160247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.160275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.160381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.160411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.160542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.160571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.160687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.160713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.160835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.160861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.161006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.161032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.161129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.161155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.161300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.161329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.161501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.161530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.161691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.161719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.161869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.161903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.125 qpair failed and we were unable to recover it.
00:34:35.125 [2024-07-14 19:06:23.162001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.125 [2024-07-14 19:06:23.162028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.162140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.162169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.162357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.162402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.162514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.162558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.162685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.162727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.162917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.162945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.163047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.163073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.163229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.163257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.163391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.163419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.163583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.163618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.163731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.163773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.163901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.163928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.164057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.164084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.164235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.164261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.164362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.164405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.164506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.164534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.164678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.164707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.164843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.164871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.165002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.165029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.165159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.165184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.165324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.165353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.165489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.165517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.165684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.165712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.165829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.165858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.166033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.166059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.166190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.166215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.166359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.166388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.166548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.166577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.166737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.166766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.166901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.166942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.167038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.167064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.167160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.167186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.167331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.167373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.167477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.167505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.167643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.167671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.167831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.167859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.168003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.168042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.168178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.168205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.168374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.168402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.168534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.168562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.126 [2024-07-14 19:06:23.168702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.126 [2024-07-14 19:06:23.168730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.126 qpair failed and we were unable to recover it.
00:34:35.127 [2024-07-14 19:06:23.168844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.127 [2024-07-14 19:06:23.168874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.127 qpair failed and we were unable to recover it.
00:34:35.127 [2024-07-14 19:06:23.168995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.127 [2024-07-14 19:06:23.169021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.127 qpair failed and we were unable to recover it.
00:34:35.127 [2024-07-14 19:06:23.169126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.169151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.169242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.169268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.169380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.169408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.169542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.169570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.169677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.169706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 
00:34:35.127 [2024-07-14 19:06:23.169820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.169848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.169999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.170030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.170129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.170170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.170266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.170294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.170455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.170484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 
00:34:35.127 [2024-07-14 19:06:23.170621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.170649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.170805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.170830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.170956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.170982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.171104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.171130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.171287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.171316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 
00:34:35.127 [2024-07-14 19:06:23.171510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.171538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.171644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.171672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.171805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.171834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.172011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.172037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.172138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.172164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 
00:34:35.127 [2024-07-14 19:06:23.172266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.172309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.172470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.172498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.172604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.172632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.172765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.172793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.172907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.172933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 
00:34:35.127 [2024-07-14 19:06:23.173061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.173086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.173178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.173203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.173346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.173374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.173483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.173511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.173623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.173651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 
00:34:35.127 [2024-07-14 19:06:23.173806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.173845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.173958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.173985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.174140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.174167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.174316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.174347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.174524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.174552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 
00:34:35.127 [2024-07-14 19:06:23.174692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.174721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.174832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.174858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.127 [2024-07-14 19:06:23.175007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.127 [2024-07-14 19:06:23.175046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.127 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.175175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.175205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.175426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.175462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 
00:34:35.128 [2024-07-14 19:06:23.175576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.175605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.175718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.175746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.175905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.175945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.176085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.176112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.176240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.176283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 
00:34:35.128 [2024-07-14 19:06:23.176430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.176466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.176606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.176654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.176764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.176793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.176947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.176973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.177073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.177098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 
00:34:35.128 [2024-07-14 19:06:23.177197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.177224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.177373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.177401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.177602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.177630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.177767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.177795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.177949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.177975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 
00:34:35.128 [2024-07-14 19:06:23.178094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.178120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.178216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.178242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.178411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.178437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.178559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.178588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.178729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.178757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 
00:34:35.128 [2024-07-14 19:06:23.178864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.178898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.179017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.179043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.179137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.179181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.179314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.179342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.179482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.179510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 
00:34:35.128 [2024-07-14 19:06:23.179616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.179647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.179811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.179839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.179973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.180012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.180178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.180217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.180338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.180368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 
00:34:35.128 [2024-07-14 19:06:23.180473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.180502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.180625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.180670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.180772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.180801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.180957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.180985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.181103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.181142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 
00:34:35.128 [2024-07-14 19:06:23.181300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.181345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.181488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.181534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.181679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.128 [2024-07-14 19:06:23.181724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.128 qpair failed and we were unable to recover it. 00:34:35.128 [2024-07-14 19:06:23.181826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.181852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.182000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.182028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 
00:34:35.129 [2024-07-14 19:06:23.182197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.182226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.182333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.182362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.182527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.182555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.182664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.182693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.182831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.182860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 
00:34:35.129 [2024-07-14 19:06:23.182979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.183005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.183152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.183194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.183342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.183372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.183540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.183569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.183738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.183769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 
00:34:35.129 [2024-07-14 19:06:23.183909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.183950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.184073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.184100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.184222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.184264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.184398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.184426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 00:34:35.129 [2024-07-14 19:06:23.184562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.129 [2024-07-14 19:06:23.184592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.129 qpair failed and we were unable to recover it. 
00:34:35.129 [2024-07-14 19:06:23.184726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.184753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.184890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.184933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.185063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.185088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.185187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.185211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.185376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.185404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.185545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.185573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.185736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.185764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.185945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.185971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.186121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.186146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.186289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.186316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.186423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.186451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.186564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.186591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.186757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.186785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.186958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.186997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.187097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.187123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.187295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.187323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.187461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.129 [2024-07-14 19:06:23.187489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.129 qpair failed and we were unable to recover it.
00:34:35.129 [2024-07-14 19:06:23.187694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.187750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.187888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.187949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.188108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.188135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.188310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.188357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.188496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.188524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.188664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.188692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.188841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.188866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.188983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.189008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.189100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.189125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.189265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.189292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.189522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.189550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.189682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.189709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.189837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.189863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.190028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.190054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.190146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.190170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.190301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.190328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.190522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.190550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.190684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.190712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.190844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.190872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.191021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.191045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.191166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.191208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.191343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.191370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.191498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.191524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.191648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.191677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.191808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.191836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.192018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.192044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.192183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.192212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.192313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.192341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.192550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.192578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.192711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.192738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.192882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.192911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.193060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.193085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.193244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.193288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.193435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.193484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.193687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.193736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.193888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.193932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.194068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.194094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.194251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.194277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.194372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.194416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.194599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.194652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.130 qpair failed and we were unable to recover it.
00:34:35.130 [2024-07-14 19:06:23.194776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.130 [2024-07-14 19:06:23.194803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.194912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.194945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.195056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.195082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.195182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.195207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.195329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.195355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.195532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.195562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.195720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.195749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.195852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.195887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.196062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.196101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.196209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.196236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.196358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.196384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.196511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.196539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.196737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.196765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.196866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.196903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.197043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.197068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.197202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.197227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.197347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.197375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.197509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.197541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.197689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.197718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.197856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.197894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.198034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.198060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.198158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.198183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.198337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.198380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.198521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.198551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.198695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.198720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.198866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.198917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.199088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.199114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.199216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.199241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.199393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.199422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.199573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.199603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.199723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.199750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.199847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.199873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.200010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.131 [2024-07-14 19:06:23.200036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.131 qpair failed and we were unable to recover it.
00:34:35.131 [2024-07-14 19:06:23.200130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.132 [2024-07-14 19:06:23.200156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.132 qpair failed and we were unable to recover it.
00:34:35.132 [2024-07-14 19:06:23.200281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.132 [2024-07-14 19:06:23.200307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.132 qpair failed and we were unable to recover it.
00:34:35.132 [2024-07-14 19:06:23.200464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.132 [2024-07-14 19:06:23.200492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.132 qpair failed and we were unable to recover it.
00:34:35.132 [2024-07-14 19:06:23.200609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.132 [2024-07-14 19:06:23.200636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.132 qpair failed and we were unable to recover it.
00:34:35.132 [2024-07-14 19:06:23.200745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.132 [2024-07-14 19:06:23.200772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.132 qpair failed and we were unable to recover it.
00:34:35.132 [2024-07-14 19:06:23.200924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.200950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.201106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.201132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.201231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.201257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.201374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.201404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.201595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.201621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 
00:34:35.132 [2024-07-14 19:06:23.201755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.201799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.201924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.201967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.202124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.202151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.202324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.202376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.202540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.202595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 
00:34:35.132 [2024-07-14 19:06:23.202734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.202777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.202913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.202941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.203065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.203091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.203221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.203247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.203343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.203369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 
00:34:35.132 [2024-07-14 19:06:23.203524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.203554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.203710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.203737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.203892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.203919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.204055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.204094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.204286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.204313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 
00:34:35.132 [2024-07-14 19:06:23.204435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.204477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.204589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.204618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.204747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.204774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.204901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.204926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.205081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.205110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 
00:34:35.132 [2024-07-14 19:06:23.205225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.205249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.205371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.205396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.205509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.205537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.205686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.205710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.205843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.205909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 
00:34:35.132 [2024-07-14 19:06:23.206066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.206097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.206202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.206228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.206381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.206422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.206551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.206606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 00:34:35.132 [2024-07-14 19:06:23.206752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.132 [2024-07-14 19:06:23.206778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.132 qpair failed and we were unable to recover it. 
00:34:35.133 [2024-07-14 19:06:23.206903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.206929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.207078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.207108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.207224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.207250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.207374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.207400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.207564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.207593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 
00:34:35.133 [2024-07-14 19:06:23.207735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.207760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.207885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.207913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.208030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.208068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.208173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.208199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.208307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.208332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 
00:34:35.133 [2024-07-14 19:06:23.208481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.208509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.208636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.208662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.208788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.208814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.208968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.209007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.209138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.209165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 
00:34:35.133 [2024-07-14 19:06:23.209268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.209292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.209447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.209475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.209598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.209623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.209742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.209768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.209896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.209940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 
00:34:35.133 [2024-07-14 19:06:23.210047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.210074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.210226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.210267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.210396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.210425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.210539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.210565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.210697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.210723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 
00:34:35.133 [2024-07-14 19:06:23.210850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.210885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.211031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.211057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.211154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.211178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.211340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.211366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.211496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.211522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 
00:34:35.133 [2024-07-14 19:06:23.211699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.211728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.211889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.211947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.212055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.212081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.133 [2024-07-14 19:06:23.212197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.133 [2024-07-14 19:06:23.212236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.133 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.212367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.212399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 
00:34:35.134 [2024-07-14 19:06:23.212574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.212601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.212711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.212738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.212864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.212897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.212993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.213020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.213141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.213167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 
00:34:35.134 [2024-07-14 19:06:23.213320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.213349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.213515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.213541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.213717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.213746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.213902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.213946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.214068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.214094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 
00:34:35.134 [2024-07-14 19:06:23.214220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.214261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.214403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.214432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.214582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.214608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.214757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.214800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.214919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.214949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 
00:34:35.134 [2024-07-14 19:06:23.215089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.215115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.215240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.215266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.215446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.215500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.215654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.215696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.215845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.215874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 
00:34:35.134 [2024-07-14 19:06:23.216000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.216026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.216178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.216204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.216294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.216337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.216500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.216530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.216670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.216696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 
00:34:35.134 [2024-07-14 19:06:23.216793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.216820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.216962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.216989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.217115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.217146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.217249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.217275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.217381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.217407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 
00:34:35.134 [2024-07-14 19:06:23.217538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.217564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.217656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.217681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.217772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.217800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.217901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.217928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 00:34:35.134 [2024-07-14 19:06:23.218039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.134 [2024-07-14 19:06:23.218066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.134 qpair failed and we were unable to recover it. 
00:34:35.134 [2024-07-14 19:06:23.218187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.218215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.218347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.218375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.218507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.218534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.218682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.218712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.218834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.218859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 
00:34:35.135 [2024-07-14 19:06:23.219004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.219030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.219186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.219238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.219403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.219429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.219524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.219550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.219693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.219722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 
00:34:35.135 [2024-07-14 19:06:23.219897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.219923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.220075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.220101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.220221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.220275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.220446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.220472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.220575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.220602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 
00:34:35.135 [2024-07-14 19:06:23.220735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.220765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.220895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.220922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.221017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.221043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.221150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.221175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.221304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.221330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 
00:34:35.135 [2024-07-14 19:06:23.221454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.221479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.221592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.221621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.221787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.221815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.221950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.221976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.222108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.222135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 
00:34:35.135 [2024-07-14 19:06:23.222262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.222288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.222419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.222463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.222591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.222642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.222783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.222813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.222909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.222936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 
00:34:35.135 [2024-07-14 19:06:23.223078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.223107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.223262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.223289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.223434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.223495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.223610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.223640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.223781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.223807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 
00:34:35.135 [2024-07-14 19:06:23.223934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.223960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.224088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.224114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.224251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.224277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.224372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.224398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 00:34:35.135 [2024-07-14 19:06:23.224552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.135 [2024-07-14 19:06:23.224579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.135 qpair failed and we were unable to recover it. 
00:34:35.135 [2024-07-14 19:06:23.224713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.224739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.224853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.224916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.225045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.225071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.225219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.225245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.225353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.225396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 
00:34:35.136 [2024-07-14 19:06:23.225519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.225569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.225689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.225715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.225840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.225865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.225965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.225991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.226095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.226121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 
00:34:35.136 [2024-07-14 19:06:23.226246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.226270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.226418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.226446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.226601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.226626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.226721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.226747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.226885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.226914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 
00:34:35.136 [2024-07-14 19:06:23.227029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.227055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.227145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.227170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.227266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.227291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.227420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.227445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.227545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.227577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 
00:34:35.136 [2024-07-14 19:06:23.227702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.227743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.227853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.227889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.228005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.228030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.228154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.228179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.228302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.228327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 
00:34:35.136 [2024-07-14 19:06:23.228429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.228455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.228595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.228624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.228745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.228770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.228865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.228895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.228988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.229013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 
00:34:35.136 [2024-07-14 19:06:23.229142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.229167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.229255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.229280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.229399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.229443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.229597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.229623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.229718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.229743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 
00:34:35.136 [2024-07-14 19:06:23.229835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.229861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.229969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.229995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.230086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.230111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.230210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.230235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.230354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.230379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 
00:34:35.136 [2024-07-14 19:06:23.230470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.136 [2024-07-14 19:06:23.230495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.136 qpair failed and we were unable to recover it. 00:34:35.136 [2024-07-14 19:06:23.230638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.137 [2024-07-14 19:06:23.230666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.137 qpair failed and we were unable to recover it. 00:34:35.137 [2024-07-14 19:06:23.230802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.137 [2024-07-14 19:06:23.230827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.137 qpair failed and we were unable to recover it. 00:34:35.137 [2024-07-14 19:06:23.230925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.137 [2024-07-14 19:06:23.230951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.137 qpair failed and we were unable to recover it. 00:34:35.137 [2024-07-14 19:06:23.231071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.137 [2024-07-14 19:06:23.231096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.137 qpair failed and we were unable to recover it. 
00:34:35.137 [2024-07-14 19:06:23.231249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.231274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.231366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.231411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.231512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.231540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.231659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.231684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.231784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.231809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.231916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.231945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.232048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.232074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.232204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.232229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.232334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.232362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.232502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.232527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.232632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.232661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.232790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.232820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.232996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.233022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.233145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.233190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.233326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.233355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.233483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.233509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.233602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.233628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.233750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.233778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.233906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.233932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.234083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.234109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.234228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.234256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.234386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.234411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.234544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.234572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.234724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.234753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.234889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.234916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.235041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.235067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.235208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.235237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.235360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.235386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.235481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.235511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.235615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.235658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.235805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.235830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.235945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.235971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.236074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.236099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.236193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.236218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.236314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.236339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.236431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.236457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.236579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.236604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.236704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.137 [2024-07-14 19:06:23.236730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.137 qpair failed and we were unable to recover it.
00:34:35.137 [2024-07-14 19:06:23.236872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.236906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.237020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.237045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.237150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.237175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.237277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.237302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.237431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.237457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.237565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.237594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.237719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.237748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.237895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.237921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.238020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.238046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.238196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.238224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.238346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.238371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.238472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.238499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.238600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.238625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.238767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.238795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.238942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.238968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.239062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.239088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.239212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.239237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.239338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.239367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.239484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.239512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.239652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.239677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.239782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.239809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.239918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.239945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.240072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.240098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.240193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.240219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.240367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.240397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.240508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.240534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.240644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.240670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.240787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.240815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.240956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.240982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.241103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.241129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.241277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.241305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.241435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.241460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.138 [2024-07-14 19:06:23.241585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.138 [2024-07-14 19:06:23.241610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.138 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.241716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.241743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.241882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.241908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.242026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.242052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.242167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.242195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.242312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.242338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.242432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.242458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.242584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.242612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.242732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.242757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.242847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.242872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.242979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.243020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.243144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.243170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.243271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.243300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.243439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.243467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.243634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.243660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.243756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.243781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.243933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.243958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.244107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.244133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.244251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.244296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.244426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.244454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.244571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.244596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.244702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.244727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.244865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.244899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.245026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.245052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.245156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.139 [2024-07-14 19:06:23.245181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.139 qpair failed and we were unable to recover it.
00:34:35.139 [2024-07-14 19:06:23.245324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.245349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.245451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.245476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.245572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.245598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.245695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.245720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.245807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.245832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 
00:34:35.139 [2024-07-14 19:06:23.245932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.245958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.246055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.246081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.246205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.246230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.246315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.246340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.246435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.246461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 
00:34:35.139 [2024-07-14 19:06:23.246577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.246605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.246720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.246748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.246851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.246887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.247026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.247065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 00:34:35.139 [2024-07-14 19:06:23.247193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.139 [2024-07-14 19:06:23.247226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.139 qpair failed and we were unable to recover it. 
00:34:35.139 [2024-07-14 19:06:23.247357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.247401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.247539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.247583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.247737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.247780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.247885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.247913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.248017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.248043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-14 19:06:23.248142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.248168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.248314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.248340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.248438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.248466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.248554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.248580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.248702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.248727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-14 19:06:23.248847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.248873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.248984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.249009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.249104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.249130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.249249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.249277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.249379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.249407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-14 19:06:23.249542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.249571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.249736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.249783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.249890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.249917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.250043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.250068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.250184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.250230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-14 19:06:23.250357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.250402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.250551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.250596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.250695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.250722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.250866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.250930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.251070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.251100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-14 19:06:23.251217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.251248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.251384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.251420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.251554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.251583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.251695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.251724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.251852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.251917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-14 19:06:23.252050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.252078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.252227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.252274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.252392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.252437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.252558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.252603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.252731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.252758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-14 19:06:23.252908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.252948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.253060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.253090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.253257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.253286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.253401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.253430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.253572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.253601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-14 19:06:23.253707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.253736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.253853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.253890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.253985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.254012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-14 19:06:23.254113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.140 [2024-07-14 19:06:23.254139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.254280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.254323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-14 19:06:23.254460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.254488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.254598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.254628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.254791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.254821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.254971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.254999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.255098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.255124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-14 19:06:23.255239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.255269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.255434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.255463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.255579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.255608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.255769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.255808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.255946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.255975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-14 19:06:23.256104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.256131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.256245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.256274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.256402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.256432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.256600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.256626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.256778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.256804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-14 19:06:23.256927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.256954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.257075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.257101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.257241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.257285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.257422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.257466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.257593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.257619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-14 19:06:23.257747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.257775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.257945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.257980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.258142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.258181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.258344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.258402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.258533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.258562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-14 19:06:23.258723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.258751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.258904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.258943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.259092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.259139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.259282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.259325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.259437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.259480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-14 19:06:23.259581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.259607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.259735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.259762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.259862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.259894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.260024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.260053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.260208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.260236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-14 19:06:23.260429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.260458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.260570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.260598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.260704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.260733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.260836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.260864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.260989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.261015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-14 19:06:23.261135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.261160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.261266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.141 [2024-07-14 19:06:23.261291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-14 19:06:23.261450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.261478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.261634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.261662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.261810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.261836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-14 19:06:23.262001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.262028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.262124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.262149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.262239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.262264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.262434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.262467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.262626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.262655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-14 19:06:23.262783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.262826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.262998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.263028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.263134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.263161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.263285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.263313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.263460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.263489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-14 19:06:23.263663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.263692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.263843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.263872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.264016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.264041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.264163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.264189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.264366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.264425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-14 19:06:23.264557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.264585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.264708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.264750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.264914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.264977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.265113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.265142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.265338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.265364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-14 19:06:23.265464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.265507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.265617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.265646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.265786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.265815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.265948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.265976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.266136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.266162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-14 19:06:23.266260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.266286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.266413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.266439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.266620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.266675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.266835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.266862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.267019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.267058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 
00:34:35.142 [2024-07-14 19:06:23.267230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.267278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.267459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.267506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.267635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.267688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.267823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.142 [2024-07-14 19:06:23.267852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.142 [2024-07-14 19:06:23.267971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.267997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-14 19:06:23.268134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.268175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.268276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.268305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.268480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.268530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.268674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.268703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.268890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.268946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-14 19:06:23.269081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.269108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.269239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.269265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.269358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.269384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.269536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.269596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.269734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.269779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-14 19:06:23.269909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.269938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.270064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.270090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.270231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.270281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.270453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.270497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.270673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.270700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-14 19:06:23.270821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.270847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.270949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.270976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.271126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.271152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.271251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.271278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.271409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.271435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-14 19:06:23.271571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.271598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.271696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.271722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.271866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.271927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.272044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.272076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.272214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.272245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-14 19:06:23.272376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.272404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.272605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.272655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.272785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.272814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.273006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.273033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.273184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.273228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-14 19:06:23.273365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.273409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.273504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.273530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.273659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.273685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.273825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.273864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.274009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.274037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-14 19:06:23.274170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.274219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.274345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.274389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.274519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.274570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.274707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.274735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.274869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.274914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 
00:34:35.143 [2024-07-14 19:06:23.275049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.275077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.275183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.275211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.143 [2024-07-14 19:06:23.275382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.143 [2024-07-14 19:06:23.275436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.143 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.275544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.275588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.275739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.275766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-14 19:06:23.275895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.275922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.276053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.276080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.276202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.276234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.276373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.276403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.276548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.276598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-14 19:06:23.276713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.276740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.276901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.276929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.277030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.277056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.277219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.277270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.277473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.277521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-14 19:06:23.277621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.277650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.277757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.277783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.277913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.277940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.278070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.278096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.278248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.278276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-14 19:06:23.278395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.278450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.278638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.278701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.278847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.278889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.279035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.279061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.279209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.279249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-14 19:06:23.279433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.279482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.279638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.279687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.279836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.279861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.279968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.279993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.280114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.280140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-14 19:06:23.280271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.280314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.280526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.280575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.280716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.280745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.280871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.280903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.281010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.281036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-14 19:06:23.281170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.281198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.281381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.281431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.281543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.281571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.281747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.281775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.281920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.281947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-14 19:06:23.282069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.282095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.282279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.282322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.282493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.282523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.282660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.282690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.282828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.282857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 
00:34:35.144 [2024-07-14 19:06:23.282991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.283017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.144 qpair failed and we were unable to recover it. 00:34:35.144 [2024-07-14 19:06:23.283119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.144 [2024-07-14 19:06:23.283146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.283323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.283352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.283514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.283555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.283740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.283770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 
00:34:35.145 [2024-07-14 19:06:23.283931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.283957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.284059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.284085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.284233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.284260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.284381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.284435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.284583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.284609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 
00:34:35.145 [2024-07-14 19:06:23.284788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.284816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.284950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.284978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.285100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.285127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.285250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.285292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.285422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.285450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 
00:34:35.145 [2024-07-14 19:06:23.285681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.285734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.285895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.285938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.286039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.286070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.286192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.286218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.286319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.286345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 
00:34:35.145 [2024-07-14 19:06:23.286522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.286553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.286691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.286720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.286843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.286870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.145 [2024-07-14 19:06:23.287025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.145 [2024-07-14 19:06:23.287050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.145 qpair failed and we were unable to recover it. 00:34:35.146 [2024-07-14 19:06:23.287143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.146 [2024-07-14 19:06:23.287169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.146 qpair failed and we were unable to recover it. 
00:34:35.146 [2024-07-14 19:06:23.287296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.146 [2024-07-14 19:06:23.287321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.146 qpair failed and we were unable to recover it. 00:34:35.146 [2024-07-14 19:06:23.287422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.146 [2024-07-14 19:06:23.287467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.146 qpair failed and we were unable to recover it. 00:34:35.146 [2024-07-14 19:06:23.287665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.146 [2024-07-14 19:06:23.287694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.146 qpair failed and we were unable to recover it. 00:34:35.146 [2024-07-14 19:06:23.287827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.146 [2024-07-14 19:06:23.287856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.146 qpair failed and we were unable to recover it. 00:34:35.146 [2024-07-14 19:06:23.287989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.146 [2024-07-14 19:06:23.288015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.146 qpair failed and we were unable to recover it. 
00:34:35.146 [2024-07-14 19:06:23.288111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.146 [2024-07-14 19:06:23.288136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.146 qpair failed and we were unable to recover it. 00:34:35.146 [2024-07-14 19:06:23.288255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.146 [2024-07-14 19:06:23.288281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.146 qpair failed and we were unable to recover it. 00:34:35.146 [2024-07-14 19:06:23.288431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.146 [2024-07-14 19:06:23.288460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.146 qpair failed and we were unable to recover it. 00:34:35.147 [2024-07-14 19:06:23.288626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.147 [2024-07-14 19:06:23.288655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.147 qpair failed and we were unable to recover it. 00:34:35.147 [2024-07-14 19:06:23.288755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.147 [2024-07-14 19:06:23.288784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.147 qpair failed and we were unable to recover it. 
00:34:35.147 [2024-07-14 19:06:23.288953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.147 [2024-07-14 19:06:23.288981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.147 qpair failed and we were unable to recover it. 00:34:35.147 [2024-07-14 19:06:23.289105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.147 [2024-07-14 19:06:23.289130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.147 qpair failed and we were unable to recover it. 00:34:35.147 [2024-07-14 19:06:23.289250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.147 [2024-07-14 19:06:23.289291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.147 qpair failed and we were unable to recover it. 00:34:35.147 [2024-07-14 19:06:23.289471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.147 [2024-07-14 19:06:23.289509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.147 qpair failed and we were unable to recover it. 00:34:35.147 [2024-07-14 19:06:23.289734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.147 [2024-07-14 19:06:23.289762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.147 qpair failed and we were unable to recover it. 
00:34:35.148 [2024-07-14 19:06:23.289942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.148 [2024-07-14 19:06:23.289968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.148 qpair failed and we were unable to recover it. 00:34:35.148 [2024-07-14 19:06:23.290087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.148 [2024-07-14 19:06:23.290112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.148 qpair failed and we were unable to recover it. 00:34:35.148 [2024-07-14 19:06:23.290212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.148 [2024-07-14 19:06:23.290238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.148 qpair failed and we were unable to recover it. 00:34:35.148 [2024-07-14 19:06:23.290335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.148 [2024-07-14 19:06:23.290361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.148 qpair failed and we were unable to recover it. 00:34:35.148 [2024-07-14 19:06:23.290486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.148 [2024-07-14 19:06:23.290514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.148 qpair failed and we were unable to recover it. 
00:34:35.148 [2024-07-14 19:06:23.290710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.148 [2024-07-14 19:06:23.290738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.148 qpair failed and we were unable to recover it. 00:34:35.148 [2024-07-14 19:06:23.290842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.148 [2024-07-14 19:06:23.290869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.148 qpair failed and we were unable to recover it. 00:34:35.148 [2024-07-14 19:06:23.291025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.149 [2024-07-14 19:06:23.291050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.149 qpair failed and we were unable to recover it. 00:34:35.149 [2024-07-14 19:06:23.291180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.149 [2024-07-14 19:06:23.291205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.149 qpair failed and we were unable to recover it. 00:34:35.149 [2024-07-14 19:06:23.291325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.149 [2024-07-14 19:06:23.291367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.149 qpair failed and we were unable to recover it. 
00:34:35.149 [2024-07-14 19:06:23.291506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.149 [2024-07-14 19:06:23.291534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.149 qpair failed and we were unable to recover it. 00:34:35.149 [2024-07-14 19:06:23.291660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.149 [2024-07-14 19:06:23.291702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.149 qpair failed and we were unable to recover it. 00:34:35.149 [2024-07-14 19:06:23.291843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.149 [2024-07-14 19:06:23.291870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.149 qpair failed and we were unable to recover it. 00:34:35.149 [2024-07-14 19:06:23.292009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.149 [2024-07-14 19:06:23.292034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.149 qpair failed and we were unable to recover it. 00:34:35.149 [2024-07-14 19:06:23.292155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.149 [2024-07-14 19:06:23.292180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.149 qpair failed and we were unable to recover it. 
00:34:35.149 [2024-07-14 19:06:23.292302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.149 [2024-07-14 19:06:23.292344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.149 qpair failed and we were unable to recover it. 00:34:35.149 [2024-07-14 19:06:23.292446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-14 19:06:23.292474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-14 19:06:23.292601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-14 19:06:23.292630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-14 19:06:23.292782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-14 19:06:23.292808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 00:34:35.150 [2024-07-14 19:06:23.292928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.150 [2024-07-14 19:06:23.292954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.150 qpair failed and we were unable to recover it. 
00:34:35.150 [2024-07-14 19:06:23.293055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.150 [2024-07-14 19:06:23.293081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.150 qpair failed and we were unable to recover it.
00:34:35.150 [2024-07-14 19:06:23.293233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.150 [2024-07-14 19:06:23.293259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.150 qpair failed and we were unable to recover it.
00:34:35.150 [2024-07-14 19:06:23.293463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.151 [2024-07-14 19:06:23.293512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.151 qpair failed and we were unable to recover it.
00:34:35.151 [2024-07-14 19:06:23.293636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.151 [2024-07-14 19:06:23.293661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.151 qpair failed and we were unable to recover it.
00:34:35.151 [2024-07-14 19:06:23.293780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.151 [2024-07-14 19:06:23.293805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.151 qpair failed and we were unable to recover it.
00:34:35.151 [2024-07-14 19:06:23.293961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.151 [2024-07-14 19:06:23.293993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.151 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.294131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.294158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.294278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.294304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.294488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.294516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.294688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.294715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.294809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.294837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.295010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.295040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.295173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.295199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.295329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.295355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.295502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.295532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.295705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.295730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.295904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.295932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.296036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.296064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.296184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.296210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.296331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.296356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.296523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.296551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.296665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.296689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.296837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.296862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.297015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.297045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.297174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.297199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.297302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.297326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.297424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.297448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.297571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.297596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.297720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.297760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.297863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.297905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.298035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.298061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.298167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.298194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.298289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.298315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.298439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.298465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.298556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.298583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.298699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.298729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.298887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.298913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.299080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.299114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.299262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.299291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.299412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.299438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.299560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.299585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.299701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.299730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.299855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.299892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.299996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.300022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.300130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.300159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.300294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.300319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.300443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.300469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.300621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.300652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.300795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.300821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.300913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.300940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.301042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.301067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.301169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.301194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.301313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.301337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.301427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.301454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.301576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.301601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.301697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.301723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.445 qpair failed and we were unable to recover it.
00:34:35.445 [2024-07-14 19:06:23.301874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.445 [2024-07-14 19:06:23.301911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.302036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.302061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.302187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.302214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.302384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.302413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.302557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.302583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.302684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.302710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.302861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.302898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.303027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.303053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.303184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.303211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.303357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.303385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.303713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.303758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.303895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.303927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.304095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.304124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.304275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.304301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.304428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.304454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.304564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.304593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.304749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.304778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.304962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.304993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.305130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.305176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.305327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.305353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.305479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.305504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.305685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.305718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.305862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.305901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.305996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.306022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.306120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.306147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.306244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.306271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.306394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.306420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.306543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.306569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.306667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.306694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.306819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.306845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.306994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.307025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.307205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.307231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.307377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.307404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.307550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.307579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.307697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.307722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.307828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.307854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.308015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.308043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.308156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.308181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.308308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.446 [2024-07-14 19:06:23.308333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.446 qpair failed and we were unable to recover it.
00:34:35.446 [2024-07-14 19:06:23.308479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.308506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.308657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.308684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.308834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.308882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.308995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.309023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.309132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.309157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 
00:34:35.446 [2024-07-14 19:06:23.309275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.309300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.309472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.309500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.309643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.309670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.309773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.309798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.309933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.309965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 
00:34:35.446 [2024-07-14 19:06:23.310098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.310125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.310236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.310263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.310412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.310438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.310590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.310616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.310715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.310742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 
00:34:35.446 [2024-07-14 19:06:23.310897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.310934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.311066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.311092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.311188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.311214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.311326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.311356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.311503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.311528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 
00:34:35.446 [2024-07-14 19:06:23.311652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.311678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.446 qpair failed and we were unable to recover it. 00:34:35.446 [2024-07-14 19:06:23.311823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.446 [2024-07-14 19:06:23.311852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.311990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.312021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.312149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.312175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.312374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.312427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.312575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.312600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.312777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.312830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.312993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.313020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.313123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.313148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.313275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.313299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.313399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.313426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.313566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.313592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.313697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.313723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.313882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.313909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.314048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.314073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.314172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.314198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.314385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.314413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.314559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.314584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.314707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.314733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.314918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.314948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.315069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.315095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.315229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.315255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.315390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.315417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.315521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.315547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.315644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.315669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.315776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.315805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.315955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.315981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.316105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.316130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.316281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.316309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.316422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.316447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.316547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.316571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.316688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.316717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.316860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.316891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.317017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.317042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.317169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.317196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.317329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.317354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.317442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.317466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.317614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.317641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.317757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.317798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.317934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.317960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.318089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.318114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.318215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.318241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.318364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.318392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.318563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.318591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.318744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.318770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.318888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.318914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.319066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.319094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.319215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.319240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.319365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.319389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.319555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.319583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.319722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.319748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.319854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.319883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.320024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.320069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.320226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.320253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.320427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.320456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.447 [2024-07-14 19:06:23.320642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.320668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.320830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.320856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.320996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.321022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.321165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.321194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 00:34:35.447 [2024-07-14 19:06:23.321336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.447 [2024-07-14 19:06:23.321362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.447 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.321487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.321514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.321692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.321721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.321840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.321866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.322016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.322043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.322182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.322211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.322333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.322360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.322489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.322514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.322652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.322680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.322829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.322855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.323019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.323045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.323231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.323275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.323402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.323428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.323528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.323553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.323647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.323672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.323830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.323857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.324011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.324036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.324134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.324175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.324353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.324378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.324474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.324499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.324645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.324674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.324828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.324853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.325022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.325048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.325171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.325204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.325351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.325376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.325502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.325528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.325652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.325677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.325836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.325861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.325992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.326035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.326142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.326169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.326320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.326345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.326471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.326497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.326614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.326645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.326768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.326794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.326924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.326952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.327129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.327160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.327312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.327339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.327467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.327509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.327643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.327672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.327786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.327811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.327920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.327946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.328101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.328139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.328258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.328284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.328413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.328438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.328562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.328593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.328733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.328760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.328890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.328939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.329089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.329114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.329270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.329296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.329385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.329410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.329572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.329599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.329742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.329771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.329871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.329942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.330072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.330097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.330197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.330224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.330385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.330428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.330539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.330570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.330720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.330746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.330885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.330936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.331043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.331071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.331220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.331245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 
00:34:35.448 [2024-07-14 19:06:23.331398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.331440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.331551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.331580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.448 [2024-07-14 19:06:23.331727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.448 [2024-07-14 19:06:23.331758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.448 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.331855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.331888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.332049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.332077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.332220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.332245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.332340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.332365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.332490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.332518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.332646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.332673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.332839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.332868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.333015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.333045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.333221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.333247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.333349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.333375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.333530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.333558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.333682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.333708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.333881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.333910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.334034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.334062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.334206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.334231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.334356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.334397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.334507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.334536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.334699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.334728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.334845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.334873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.335059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.335085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.335211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.335235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.335357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.335399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.335534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.335563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.335705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.335731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.335854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.335888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.336084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.336113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.336269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.336295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.336421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.336463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.336612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.336638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.336761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.336787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.336889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.336925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.337021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.337047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.337168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.337194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.337315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.337340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.337458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.337483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.337582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.337607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.337733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.337760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.337912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.337948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.338096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.338122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.338221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.338251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.338395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.338423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.338534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.338560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.338652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.338679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.338821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.338850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.339030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.339056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.339200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.339229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 00:34:35.449 [2024-07-14 19:06:23.339353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.449 [2024-07-14 19:06:23.339382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.449 qpair failed and we were unable to recover it. 
00:34:35.449 [2024-07-14 19:06:23.339525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.339551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.339681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.339707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.339866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.339911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.340062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.340088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.340208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.340233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.340358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.340387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.340540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.340565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.340662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.340687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.340802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.340830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.340942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.340986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.341114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.341139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.341272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.341301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.341450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.341476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.341605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.341631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.341755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.341780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.341874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.341906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.342035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.342061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.342221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.342263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.342412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.342437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.342565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.342591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.449 [2024-07-14 19:06:23.342762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.449 [2024-07-14 19:06:23.342788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.449 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.342906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.342933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.343031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.343057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.343201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.343230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.343379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.343405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.343525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.343551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.343663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.343691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.343807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.343833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.343956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.343982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.344131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.344172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.344288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.344314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.344436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.344463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.344620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.344646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.344775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.344802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.344926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.344952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.345047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.345073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.345162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.345188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.345306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.345331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.345522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.345548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.345640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.345665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.345785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.345811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.345937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.345967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.346121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.346146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.346298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.346324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.346438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.346467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.346583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.346608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.346710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.346736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.346847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.346885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.347027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.347053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.347137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.347163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.347285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.347314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.347486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.347511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.347602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.347627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.347806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.347845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.347985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.348014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.348134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.348161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.348326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.348353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.348475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.348501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.348651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.348695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.348805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.348839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.348990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.349017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.349143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.349170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.349320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.349349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.349520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.349547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.349699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.349725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.349852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.349895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.350076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.350103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.350243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.350268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.350420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.350461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.350587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.350613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.350738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.350764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.350887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.350918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.351061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.351087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.351221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.351247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.351433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.351458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.351610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.351635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.351754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.351798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.351962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.351989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.352112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.352138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.352287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.450 [2024-07-14 19:06:23.352331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.450 qpair failed and we were unable to recover it.
00:34:35.450 [2024-07-14 19:06:23.352435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.352466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.352609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.352635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.352729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.352755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.352953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.352980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.353106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.353132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.353231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.353257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.353415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.353445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.353566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.353592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.353723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.353749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.353900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.353932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.354083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.354110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.354219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.354244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.354349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.354378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.354529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.354554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.354677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.451 [2024-07-14 19:06:23.354704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.451 qpair failed and we were unable to recover it.
00:34:35.451 [2024-07-14 19:06:23.354838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.354866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.354986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.355012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.355162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.355188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.355336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.355365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.355489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.355518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.355617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.355643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.355786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.355816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.355973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.356000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.356126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.356167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.356301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.356330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.356452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.356478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.356596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.356622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.356755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.356799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.356953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.356982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.357106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.357150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.357262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.357292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.357412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.357438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.357571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.357597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.357756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.357789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.357970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.357997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.358124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.358167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.358324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.358380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.358525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.358551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.358701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.358745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.358886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.358916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.359062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.359088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.359210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.359254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.359367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.359397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.359546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.359572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.359662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.359687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.359836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.359866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.360022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.360048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.360175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.360217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.360354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.360382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.360526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.360552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.360657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.360682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.360776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.360802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.360903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.360930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.361053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.361079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.361235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.361264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.361373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.361399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.361518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.361544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.361692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.361723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.361871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.361912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.362041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.362087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.362254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.362283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.362434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.362460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.362585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.362611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.362723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.362753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.451 [2024-07-14 19:06:23.362901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.362928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 
00:34:35.451 [2024-07-14 19:06:23.363023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.451 [2024-07-14 19:06:23.363050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.451 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.363222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.363251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.363363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.363390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.363543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.363569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.363729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.363760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 
00:34:35.452 [2024-07-14 19:06:23.363907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.363933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.364033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.364058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.364217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.364246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.364363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.364389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.364506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.364531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 
00:34:35.452 [2024-07-14 19:06:23.364674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.364703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.364841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.364870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.365006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.365032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.365158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.365183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.365335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.365361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 
00:34:35.452 [2024-07-14 19:06:23.365455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.365481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.365573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.365601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.365757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.365787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.365908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.365935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.366068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.366094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 
00:34:35.452 [2024-07-14 19:06:23.366243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.366269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.366416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.366444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.366588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.366618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.366762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.366787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.366917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.366943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 
00:34:35.452 [2024-07-14 19:06:23.367094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.367120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.367279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.367305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.367403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.367429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.367551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.367581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.367718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.367745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 
00:34:35.452 [2024-07-14 19:06:23.367897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.367942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.368068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.368097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.368240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.368266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.368395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.368421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 00:34:35.452 [2024-07-14 19:06:23.368520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.452 [2024-07-14 19:06:23.368550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.452 qpair failed and we were unable to recover it. 
00:34:35.452 [2024-07-14 19:06:23.368668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.368693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.368841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.368891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.369004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.369033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.369147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.369173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.369297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.369323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.369468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.369498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.369669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.369695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.369839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.369868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.370017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.370047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.370198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.370225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.370356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.370382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.370509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.370535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.370631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.370657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.370781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.370807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.370958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.370987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.371166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.371192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.371290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.371316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.371472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.371500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.371671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.371698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.371783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.371809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.371986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.372015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.372124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.372151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.372275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.372301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.372418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.372448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.372585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.372611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.372765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.372806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.372965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.372996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.373149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.373174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.373324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.373367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.373503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.373531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.373642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.373667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.373793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.452 [2024-07-14 19:06:23.373819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.452 qpair failed and we were unable to recover it.
00:34:35.452 [2024-07-14 19:06:23.373945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.373971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.374101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.374127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.374224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.374249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.374368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.374396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.374565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.374590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.374711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.374736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.374865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.374900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.375045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.375075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.375204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.375230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.375357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.375385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.375537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.375563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.375651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.375676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.375764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.375790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.375919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.375945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.376071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.376096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.376215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.376245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.376357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.376383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.376477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.376503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.376614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.376643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.376762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.376788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.376915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.376941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.377082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.377125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.377278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.377305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.377432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.377460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.377614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.377644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.377813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.377838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.377975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.378001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.378100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.378127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.378300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.378325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.378422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.378448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.378572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.378597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.378720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.378747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.378874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.378931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.379085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.379110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.379235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.379261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.379389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.379431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.379603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.379634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.379799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.379829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.379988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.380014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.380133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.380159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.380311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.380337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.380508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.380537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.380708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.380739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.380860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.380892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.381016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.381042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.381185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.381213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.381339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.381364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.381461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.381490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.381603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.381633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.381768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.381794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.381923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.381949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.382055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.382081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.382174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.382201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.382325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.382351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.382484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.382512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.382665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.382691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.453 qpair failed and we were unable to recover it.
00:34:35.453 [2024-07-14 19:06:23.382858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.453 [2024-07-14 19:06:23.382897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.383062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.383091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.383263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.383289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.383422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.383448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.383599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.383640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.383823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.383849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.383999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.384025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.384174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.384199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.384329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.384355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.384479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.384522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.384636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.384666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.384833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.384858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.384961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.384987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.385114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.385141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.385336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.385361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.385486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.385511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.385638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.385666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.385811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.385839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.386004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.386031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.386162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.386188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.386281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.386306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.386401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.386428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.386566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.386596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.386715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.386741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.386867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.386909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.387064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.387093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.387242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.454 [2024-07-14 19:06:23.387268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.454 qpair failed and we were unable to recover it.
00:34:35.454 [2024-07-14 19:06:23.387439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.387467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.387604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.387632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.387775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.387801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.387935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.387962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.388091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.388121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 
00:34:35.454 [2024-07-14 19:06:23.388269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.388295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.388387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.388431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.388563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.388592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.388764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.388790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.388890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.388943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 
00:34:35.454 [2024-07-14 19:06:23.389077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.389108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.389262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.389287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.389410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.389435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.389533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.389558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.389659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.389684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 
00:34:35.454 [2024-07-14 19:06:23.389815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.389840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.389993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.390024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.390171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.390197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.390302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.390328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.390476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.390504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 
00:34:35.454 [2024-07-14 19:06:23.390673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.390698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.390842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.390870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.390995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.391025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.391145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.391170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.391301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.391326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 
00:34:35.454 [2024-07-14 19:06:23.391452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.391477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.391576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.391601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.391715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.391745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.391874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.391923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.392075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.392100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 
00:34:35.454 [2024-07-14 19:06:23.392236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.392264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.392407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.392435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.392575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.392601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.392732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.392758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.392928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.392954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 
00:34:35.454 [2024-07-14 19:06:23.393110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.393136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.393277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.393305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.393447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.393475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.454 qpair failed and we were unable to recover it. 00:34:35.454 [2024-07-14 19:06:23.393618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.454 [2024-07-14 19:06:23.393644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.393732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.393757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 
00:34:35.455 [2024-07-14 19:06:23.393886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.393916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.394063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.394089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.394182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.394208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.394354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.394384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.394513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.394542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 
00:34:35.455 [2024-07-14 19:06:23.394639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.394665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.394830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.394869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.395027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.395054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.395185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.395211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.395390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.395419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 
00:34:35.455 [2024-07-14 19:06:23.395526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.395552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.395657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.395685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.395783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.395809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.395937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.395964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.396091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.396134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 
00:34:35.455 [2024-07-14 19:06:23.396314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.396344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.396485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.396512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.396661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.396687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.396846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.396882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.397013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.397039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 
00:34:35.455 [2024-07-14 19:06:23.397138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.397168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.397333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.397361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.397461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.397487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.397617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.397643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.397760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.397789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 
00:34:35.455 [2024-07-14 19:06:23.397940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.397967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.398071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.398098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.398214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.398242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.398359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.398384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.398512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.398538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 
00:34:35.455 [2024-07-14 19:06:23.398687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.398717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.398868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.398901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.399052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.399096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.399232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.399261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 00:34:35.455 [2024-07-14 19:06:23.399406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.455 [2024-07-14 19:06:23.399432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.455 qpair failed and we were unable to recover it. 
00:34:35.455 [2024-07-14 19:06:23.399531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.455 [2024-07-14 19:06:23.399557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.455 qpair failed and we were unable to recover it.
00:34:35.455 [2024-07-14 19:06:23.399698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.455 [2024-07-14 19:06:23.399728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.455 qpair failed and we were unable to recover it.
00:34:35.457 [2024-07-14 19:06:23.418586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.418615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.418741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.418766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.418944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.418974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.419076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.419105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.419253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.419278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.419403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.419429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.419594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.419623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.419795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.419821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.419956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.419983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.420079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.420105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.420268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.420294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.420384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.420426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.420566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.420595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.420742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.420767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.420897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.420953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.421105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.421136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.421280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.421306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.421426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.421452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.421567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.421596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.421746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.421771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.421896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.421922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.422044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.422074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.422234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.422260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.422421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.422447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.422592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.422620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.422742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.422767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.422858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.422889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.423041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.423073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.423205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.423231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.423353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.423379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.423497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.423526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.423714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.423742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.423845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.423875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.424038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.424064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.424184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.424210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.424313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.424339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.424493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.424523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.424640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.424667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.424794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.424820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.424940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.424970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.425120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.425145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.425253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.425278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.425403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.425429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.425554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.425580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.425703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.425744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.425839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.425867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.426011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.426036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.426133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.426159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.426298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.426326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.426442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.426468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.426564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.426588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.426710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.426739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.426895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.426920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.427016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.427042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.427186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.427214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.427366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.427392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.427487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.427512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.427627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.427655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.427798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.427825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.427922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.427947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.428050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.428075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.428176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.428201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.428303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.428329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.428491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.428534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.428689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.428717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.428895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.428925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.429033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.429062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.429214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.429241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.429370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.429413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.429552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.429581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.429743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.429771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.429903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.429945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.430054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.430079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.430173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.430199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.430316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.430341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 
00:34:35.457 [2024-07-14 19:06:23.430463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.430492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.430602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.430628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.430725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.457 [2024-07-14 19:06:23.430750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.457 qpair failed and we were unable to recover it. 00:34:35.457 [2024-07-14 19:06:23.430894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.430937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.431069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.431094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.431215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.431256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.431393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.431426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.431592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.431617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.431739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.431781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.431920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.431963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.432114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.432142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.432267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.432293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.432417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.432447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.432622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.432648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.432746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.432773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.432925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.432956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.433077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.433102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.433232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.433257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.433401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.433429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.433567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.433593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.433719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.433745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.433891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.433923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.434039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.434065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.434171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.434197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.434369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.434398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.434541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.434568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.434660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.434685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.434807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.434837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.434998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.435024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.435151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.435193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.435299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.435327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.435468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.435494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.435615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.435640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.435801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.435845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.436018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.436046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.436217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.436246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.436353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.436382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.436529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.436555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.436648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.436674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.436800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.436830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.436996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.437023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.437196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.437225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.437459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.437512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.437642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.437669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.437774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.437801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.437970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.437999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.441040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.441090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.441268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.441299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.441411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.441441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.441587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.441613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.441770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.441813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.441964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.441989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.442142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.442167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.442312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.442340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.442489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.442514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.442644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.442669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.442791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.442817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.442939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.442966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.443119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.443144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.443288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.443316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.443472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.443497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.443629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.443654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.443780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.443822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.443933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.443962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.444109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.444134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.444261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.444286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.444412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.444440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.444584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.444609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.444733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.444759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.444941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.444967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 
00:34:35.458 [2024-07-14 19:06:23.445093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.445118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.445250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.445275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.445397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.458 [2024-07-14 19:06:23.445425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.458 qpair failed and we were unable to recover it. 00:34:35.458 [2024-07-14 19:06:23.445585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.445610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.445734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.445760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.445913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.445942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.446061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.446086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.446212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.446239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.446349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.446378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.446520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.446546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.446653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.446678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.446819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.446858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.446974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.447002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.447159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.447185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.447383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.447434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.447577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.447603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.447703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.447734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.447843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.447892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.448045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.448071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.448191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.448217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.448421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.448478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.448592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.448617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.448713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.448739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.448900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.448927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.449030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.449055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.449177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.449202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.449375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.449401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.449533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.449558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.449682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.449708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.449829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.449854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.450017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.450042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.450136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.450177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.450289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.450318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.450464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.450490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.450585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.450611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.450723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.450751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.450901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.450928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.451093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.451122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.451223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.451251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.451428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.451453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.451567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.451593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.451739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.451768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.451943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.451969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.452076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.452101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.452196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.452222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.452375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.452400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.452539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.452567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.452694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.452722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.452844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.452870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.452973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.452999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.453123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.453149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.453300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.453325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.453422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.453447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.453593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.453623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.453733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.453759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.453864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.453895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.454037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.454069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.454216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.454241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.454330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.454355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.454459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.454487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.454661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.454686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.454787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.454813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.454997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.455026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.455177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.455202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.455332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.455357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.455505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.455547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.455686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.455713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.455813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.455840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.455955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.455981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.456134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.456160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.456287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.456313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.456411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.456437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.456567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.456594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.456686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.456711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.456873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.456906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.457002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.457028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.457178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.457204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.457375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.457403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.457523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.457549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 00:34:35.459 [2024-07-14 19:06:23.457675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.459 [2024-07-14 19:06:23.457700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.459 qpair failed and we were unable to recover it. 
00:34:35.459 [2024-07-14 19:06:23.457807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.457836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.457962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.457988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.458114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.458139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.458305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.458334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.458453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.458479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.458630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.458656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.458813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.458841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.458974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.459000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.459129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.459154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.459294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.459323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.459461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.459487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.459584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.459609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.459722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.459751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.459883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.459909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.460000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.460025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.460153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.460197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.460342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.460372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.460520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.460563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.460725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.460754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.460922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.460949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.461098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.461124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.461294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.461344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.461492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.461518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.461643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.461685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.461833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.461886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.462015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.462042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.462143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.462169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.462314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.462343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.462514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.462540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.462665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.462707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.462828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.462859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.462983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.463009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.463159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.463184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.463302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.463332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.463455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.463481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.463598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.463623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.463773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.463802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.463912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.463938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.464035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.464061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.464180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.464209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.464327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.464354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.464486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.464512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.464621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.464649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.464780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.464805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.464895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.464922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.465046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.465074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.465200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.465225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.465325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.465351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.465489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.465517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 00:34:35.460 [2024-07-14 19:06:23.465652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.460 [2024-07-14 19:06:23.465677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.460 qpair failed and we were unable to recover it. 
00:34:35.460 [2024-07-14 19:06:23.465826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.465851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.466002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.466042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.466174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.466201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.466323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.466366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.466557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.466611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.466736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.466762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.466894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.466926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.467024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.467052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.467160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.467186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.467304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.467330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.467483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.467511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.467634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.467659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.467756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.467782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.467941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.467972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.468097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.468130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.468231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.468256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.468418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.468443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.468573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.468605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.460 [2024-07-14 19:06:23.468775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.460 [2024-07-14 19:06:23.468804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.460 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.468955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.468987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.469169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.469196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.469297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.469323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.469481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.469508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.469673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.469699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.469810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.469835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.469992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.470019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.470144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.470170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.470275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.470302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.470405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.470431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.470549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.470575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.470718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.470762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.470896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.470927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.471098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.471124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.471285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.471314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.471425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.471454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.471612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.471638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.471743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.471786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.471912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.471946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.472042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.472069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.472181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.472208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.472356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.472384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.472536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.472562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.472687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.472730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.472840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.472869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.473033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.473059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.473193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.473235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.473414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.473445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.473569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.473595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.473687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.473712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.473849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.473884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.474010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.474036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.474167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.474193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.474369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.474399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.474514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.474539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.474661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.474687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.474804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.474834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.474991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.475017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.475169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.475212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.475322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.475350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.475520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.475546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.475650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.475675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.475831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.475860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.475999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.476026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.476126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.476151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.476274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.476303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.476428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.476454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.476604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.476630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.476778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.476807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.476940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.476967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.477119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.477146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.477265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.477294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.477411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.477439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.477588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.477614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.477736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.477769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.477913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.477942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.478071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.478098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.478225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.478257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.478403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.478434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.478564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.478612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.478725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.478755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.478901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.478929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.479089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.479115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.479238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.479267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.479390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.479417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.479547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.479573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.479726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.479755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.479902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.479936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.480066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.480092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.480216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.480242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.480368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.480393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.480506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.480532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.480694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.480723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.480848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.480874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.481011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.481038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.481158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.461 [2024-07-14 19:06:23.481186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.461 qpair failed and we were unable to recover it.
00:34:35.461 [2024-07-14 19:06:23.481342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-14 19:06:23.481368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-14 19:06:23.481495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-14 19:06:23.481521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-14 19:06:23.481670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-14 19:06:23.481699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-14 19:06:23.481843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-14 19:06:23.481869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-14 19:06:23.481998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-14 19:06:23.482024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 
00:34:35.461 [2024-07-14 19:06:23.482149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.461 [2024-07-14 19:06:23.482184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.461 qpair failed and we were unable to recover it. 00:34:35.461 [2024-07-14 19:06:23.482320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.482346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.482434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.482459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.482581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.482610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.482761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.482787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.482884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.482910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.483069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.483097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.483251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.483277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.483379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.483406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.483583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.483612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.483725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.483753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.483862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.483900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.484020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.484051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.484178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.484205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.484330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.484357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.484511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.484541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.484733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.484762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.484864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.484903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.485022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.485050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.485180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.485206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.485378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.485407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.485537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.485566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.485709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.485735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.485851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.485885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.486043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.486072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.486213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.486240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.486367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.486397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.486524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.486553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.486678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.486704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.486825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.486850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.487005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.487034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.487148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.487174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.487297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.487322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.487438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.487466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.487601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.487627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.487757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.487782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.487911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.487938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.488030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.488056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.488187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.488213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.488341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.488369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.488518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.488544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.488639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.488665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.488791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.488820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.488989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.489015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.489110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.489135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.489256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.489285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.489400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.489426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.489553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.489578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.489705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.489730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.489856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.489889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.490018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.490060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.490224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.490253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.490401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.490428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.490554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.490579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.490699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.490728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.490896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.490923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.491018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.491043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.491172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.491200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.491327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.491352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.491449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.491474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.491624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.491651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.491790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.491817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.491971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.491998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.492086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.492111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.492221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.492247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.492374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.492399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.492520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.492553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.492703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.492729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.492838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.492884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.492985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.493012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.493118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.493144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.493241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.493266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.493393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.493419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.493511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.493536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.493634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.493660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 
00:34:35.462 [2024-07-14 19:06:23.493792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.493822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.493967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.493994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.462 qpair failed and we were unable to recover it. 00:34:35.462 [2024-07-14 19:06:23.494120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.462 [2024-07-14 19:06:23.494156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.494280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.494305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.494396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.494422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 
00:34:35.463 [2024-07-14 19:06:23.494558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.494586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.494688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.494713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.494809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.494834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.494960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.494986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.495087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.495112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 
00:34:35.463 [2024-07-14 19:06:23.495238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.495264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.495366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.495392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.495519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.495545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.495662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.495687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 00:34:35.463 [2024-07-14 19:06:23.495808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.463 [2024-07-14 19:06:23.495834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.463 qpair failed and we were unable to recover it. 
00:34:35.463 [2024-07-14 19:06:23.495976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.496002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.496107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.496133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.496259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.496283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.496407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.496441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.496571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.496597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.496707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.496746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.496932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.496963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.497106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.497132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.497254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.497280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.497442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.497467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.497558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.497583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.497675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.497701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.497837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.497865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.497997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.498022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.498160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.498186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.498303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.498332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.498451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.498476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.498607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.498632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.498775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.498803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.498927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.498953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.499077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.499102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.499227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.499255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.499402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.499427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.499520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.499545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.499661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.499689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.499811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.499836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.499961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.500000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.500128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.500158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.500284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.500309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.500433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.500459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.500576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.500611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.500736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.500763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.500906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.500933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.501096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.501123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.501241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.501267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.501377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.501403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.501517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.501543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.501676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.501701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.501802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.501829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.501932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.501957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.502066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.502091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.502187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.502213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.502315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.502340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.502463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.502488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.502589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.502614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.502765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.502793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.502905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.502931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.503050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.503076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.503178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.503206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.503324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.503350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.503440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.503465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.503605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.503633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.503756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.503783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.503901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.503927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.504038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.504066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.504187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.504213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.504312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.504337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.504464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.504497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.504666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.504691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.504812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.504837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.505014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.505041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.505145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.505170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.505268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.505294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.505413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.505441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.505566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.505591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.505696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.505721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.505862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.505896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.506042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.506067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.506188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.506213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.506333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.506361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.463 [2024-07-14 19:06:23.506491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.463 [2024-07-14 19:06:23.506516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.463 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.506733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.506761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.506859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.506895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.507035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.507060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.507189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.507214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.507354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.507380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.507475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.507500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.507599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.507624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.507724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.507749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.507905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.507949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.508054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.508079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.508197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.508225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.508379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.508404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.508533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.508558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.508705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.508737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.508869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.508903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.508992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.509018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.509120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.509148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.509264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.509290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.509387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.509412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.509560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.509589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.509699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.509724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.509850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.509875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.510060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.510088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.510214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.464 [2024-07-14 19:06:23.510239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.464 qpair failed and we were unable to recover it.
00:34:35.464 [2024-07-14 19:06:23.510362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.510387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.510483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.510509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.510628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.510654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.510765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.510805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.510972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.511004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-14 19:06:23.511151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.511176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.511304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.511330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.511447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.511475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.511596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.511622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.511745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.511770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-14 19:06:23.511886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.511915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.512057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.512083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.512179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.512205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.512323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.512351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.512526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.512551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-14 19:06:23.512654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.512696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.512837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.512871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.513009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.513035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.513172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.513214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.513348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.513377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-14 19:06:23.513488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.513513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.513617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.513642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.513753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.513781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.513899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.513926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.514030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.514055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-14 19:06:23.514201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.514226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.514327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.514352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.514462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.514489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.514585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.514628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.514797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.514825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-14 19:06:23.515021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.515047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.515167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.515196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.515317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.515343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.515446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.515473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.515572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.515597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-14 19:06:23.515728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.515754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.515887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.515914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.516040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.516068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.516214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.516239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.516337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.516363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-14 19:06:23.516476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.516504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.516650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.516675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.516783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.516810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.517001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.517031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.464 [2024-07-14 19:06:23.517130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.517155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 
00:34:35.464 [2024-07-14 19:06:23.517253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.464 [2024-07-14 19:06:23.517279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.464 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.517401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.517426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.517628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.517654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.517771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.517799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.517904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.517934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.518049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.518075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.518194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.518220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.518329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.518357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.518472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.518497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.518597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.518622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.518745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.518773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.518891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.518917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.519020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.519046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.519159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.519187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.519320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.519345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.519444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.519469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.519586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.519612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.519733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.519760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.519863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.519913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.520015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.520040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.520134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.520159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.520273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.520298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.520470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.520498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.520621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.520646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.520748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.520773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.520887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.520920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.521041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.521067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.521193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.521218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.521337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.521366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.521490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.521515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.521618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.521643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.521794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.521822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.521950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.521976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.522072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.522097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.522237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.522265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.522389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.522414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.522574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.522616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.522710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.522739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.522861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.522892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.522988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.523014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.523169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.523194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.523314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.523339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.523445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.523472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.523623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.523651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.523793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.523819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.523960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.524016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.524133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.524164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.524269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.524310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.524441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.524467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.524619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.524649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.524766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.524792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.524958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.525001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.525146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.525178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.525319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.525344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.525470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.525512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.525644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.525672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.525812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.525838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.525950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.525978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.526140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.526165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.526291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.526317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.526423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.526449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.526605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.526634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.526776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.526802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.526935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.526980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.527093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.527135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.527227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.527253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.527352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.527377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.527532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.527560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.527673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.527699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.527793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.527818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.527933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.527962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.528084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.528109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.528237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.528262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.528385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.528413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.528528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.528552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 
00:34:35.465 [2024-07-14 19:06:23.528641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.528666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.528771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.528800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.528943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.465 [2024-07-14 19:06:23.528969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.465 qpair failed and we were unable to recover it. 00:34:35.465 [2024-07-14 19:06:23.529116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.529152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.529271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.529303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.529441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.529466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.529561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.529586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.529684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.529710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.529836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.529862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.529965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.529991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.530132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.530160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.530332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.530358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.530479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.530520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.530619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.530647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.530770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.530814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.530938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.530964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.531085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.531110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.531230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.531255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.531387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.531413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.531553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.531581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.531702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.531728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.531854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.531885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.532037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.532066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.532217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.532242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.532332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.532357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.532498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.532526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.532662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.532688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.532805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.532830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.532964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.532994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.533113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.533138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.533229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.533255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.533429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.533458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.533617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.533642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.533788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.533814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.533974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.534002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.534148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.534174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.534275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.534300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.534478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.534506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.534674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.534699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.534800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.534826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.534948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.534976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.535130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.535156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.535251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.535278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.535391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.535420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.535556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.535590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.535711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.535741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.535888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.535932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.536026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.536052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.536150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.536176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.536300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.536328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.536534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.536560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.536667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.536692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.536789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.536815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.537025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.537052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.537255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.537283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.537389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.537419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.537532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.537558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.537699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.537738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.537852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.537898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.538013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.538040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.538137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.538163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.538340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.538369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.538491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.538518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.538644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.538671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.538807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.538836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.538964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.538990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.539092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.539118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.539221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.539265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.539382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.539407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 00:34:35.466 [2024-07-14 19:06:23.539522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.466 [2024-07-14 19:06:23.539548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.466 qpair failed and we were unable to recover it. 
00:34:35.466 [2024-07-14 19:06:23.539643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.466 [2024-07-14 19:06:23.539668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.466 qpair failed and we were unable to recover it.
00:34:35.466 [2024-07-14 19:06:23.539769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.466 [2024-07-14 19:06:23.539794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.466 qpair failed and we were unable to recover it.
00:34:35.466 [2024-07-14 19:06:23.539892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.466 [2024-07-14 19:06:23.539922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.466 qpair failed and we were unable to recover it.
00:34:35.466 [2024-07-14 19:06:23.540058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.466 [2024-07-14 19:06:23.540086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.466 qpair failed and we were unable to recover it.
00:34:35.466 [2024-07-14 19:06:23.540231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.466 [2024-07-14 19:06:23.540257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.466 qpair failed and we were unable to recover it.
00:34:35.466 [2024-07-14 19:06:23.540382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.466 [2024-07-14 19:06:23.540407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.466 qpair failed and we were unable to recover it.
00:34:35.466 [2024-07-14 19:06:23.540565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.466 [2024-07-14 19:06:23.540593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.466 qpair failed and we were unable to recover it.
00:34:35.466 [2024-07-14 19:06:23.540713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.466 [2024-07-14 19:06:23.540739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.540850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.540880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.541036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.541064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.541218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.541243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.541338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.541364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.541452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.541494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.541614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.541639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.541749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.541800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.541944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.541973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.542075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.542101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.542241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.542267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.542374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.542400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.542506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.542531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.542684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.542727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.542862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.542898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.543044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.543070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.543178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.543205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.543357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.543387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.543505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.543532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.543634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.543659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.543807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.543836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.543997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.544024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.544125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.544152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.544244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.544286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.544430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.544457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.544582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.544607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.544751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.544779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.544901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.544928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.545032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.545060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.545195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.545221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.545370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.545396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.545487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.545513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.545677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.545719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.545818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.545844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.545948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.545974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.546091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.546119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.546269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.546294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.546420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.546445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.546606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.546647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.546810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.546838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.546964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.546989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.547092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.547117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.547257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.547282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.547377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.547402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.547512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.547540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.547685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.547710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.547833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.547858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.547995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.548024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.548230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.548255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.548350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.548394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.548505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.548533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.548646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.548671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.548760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.548786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.548895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.548934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.549053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.549079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.549209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.549234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.549387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.549415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.549531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.549556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.549775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.549804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.549911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.549940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.550060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.550086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.550212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.550238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.550363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.550392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.550521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.550548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.550675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.550700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.550857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.550892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.551033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.551058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.551154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.551179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.551294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.551322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.551458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.551483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.551571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.551596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.551690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.551733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.551856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.551911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.552057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.552082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.552286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.552314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.552435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.552460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.552560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.552589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.552762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.552790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.552925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.552951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.553055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.553080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.553229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.553255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.553386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.467 [2024-07-14 19:06:23.553412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.467 qpair failed and we were unable to recover it.
00:34:35.467 [2024-07-14 19:06:23.553547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.468 [2024-07-14 19:06:23.553573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.468 qpair failed and we were unable to recover it.
00:34:35.468 [2024-07-14 19:06:23.553670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.468 [2024-07-14 19:06:23.553695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.468 qpair failed and we were unable to recover it.
00:34:35.468 [2024-07-14 19:06:23.553840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.468 [2024-07-14 19:06:23.553866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.468 qpair failed and we were unable to recover it.
00:34:35.468 [2024-07-14 19:06:23.553968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.468 [2024-07-14 19:06:23.554010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.468 qpair failed and we were unable to recover it.
00:34:35.468 [2024-07-14 19:06:23.554124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.468 [2024-07-14 19:06:23.554152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.468 qpair failed and we were unable to recover it.
00:34:35.468 [2024-07-14 19:06:23.554295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.468 [2024-07-14 19:06:23.554321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.468 qpair failed and we were unable to recover it.
00:34:35.468 [2024-07-14 19:06:23.554447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.554488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.554600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.554628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.554778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.554803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.554909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.554935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.555054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.555083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.555219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.555244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.555368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.555395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.555566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.555591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.555714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.555739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.555853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.555900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.556077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.556104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.556235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.556262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.556386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.556413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.556511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.556538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.556669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.556695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.556793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.556823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.556996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.557022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.557148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.557174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.557281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.557307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.557406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.557431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.557534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.557559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.557685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.557711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.557825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.557858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.558001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.558027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.558118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.558143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.558289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.558318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.558447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.558472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.558599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.558625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.558773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.558800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.558935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.558961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.559059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.559085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.559203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.559231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.559347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.559373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.559490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.559515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.559633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.559661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.559801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.559826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.559916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.559942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.560045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.560073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.560223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.560249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.560345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.560371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.560517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.560545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.560657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.560698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.560796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.560828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.560966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.560992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.561113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.561139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.561246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.561271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.561424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.561453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.561592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.561617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.561725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.561750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.561889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.561915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.562008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.562033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.562150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.562176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.562309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.562337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.562457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.562483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.562609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.562634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.562758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.562786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.562907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.562938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.563068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.563093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.563204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.563232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.563348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.563373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.563493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.563518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.563656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.563685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.563818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.563843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.563970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.563996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.564116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.564144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.564274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.564299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.564448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.564473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.564600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.564628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.564745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.564770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.564884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.564933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.565098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.565129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.565248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.565276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.565384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.565419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.565543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.565571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.468 [2024-07-14 19:06:23.565712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.565740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 
00:34:35.468 [2024-07-14 19:06:23.565880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.468 [2024-07-14 19:06:23.565924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.468 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-14 19:06:23.566029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-14 19:06:23.566054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-14 19:06:23.566149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-14 19:06:23.566175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-14 19:06:23.566263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-14 19:06:23.566288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-14 19:06:23.566412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-14 19:06:23.566439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 
00:34:35.469 [2024-07-14 19:06:23.566592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-14 19:06:23.566618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-14 19:06:23.566733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-14 19:06:23.566759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-14 19:06:23.566886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-14 19:06:23.566912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-14 19:06:23.567011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-14 19:06:23.567036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 00:34:35.469 [2024-07-14 19:06:23.567131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.469 [2024-07-14 19:06:23.567156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.469 qpair failed and we were unable to recover it. 
00:34:35.469 [2024-07-14 19:06:23.567249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.567276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.567388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.567414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.567508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.567533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.567642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.567671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.567847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.567872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.567968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.567993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.568114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.568143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.568253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.568278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.568365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.568390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.568501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.568529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.568648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.568674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.568804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.568832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.568983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.569009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.569141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.569168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.569296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.569324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.569472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.569501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.569643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.569669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.569796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.569822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.569960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.569989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.570114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.570140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.570248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.570274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.570362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.570406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.570558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.570583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.570690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.570716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.570807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.570833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.570950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.570976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.571076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.571101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.571258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.571286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.571433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.571459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.571578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.571604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.571720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.571748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.571894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.571920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.572018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.572043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.572171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.572200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.572324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.572349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.572475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.572500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.572617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.572645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.572766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.572791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.572890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.572916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.573039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.573065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.573166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.573191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.573286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.573311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.573495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.573521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.573617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.573644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.573767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.573793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.573974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.574003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.574123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.574148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.574246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.574271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.574403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.574431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.574558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.469 [2024-07-14 19:06:23.574583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.469 qpair failed and we were unable to recover it.
00:34:35.469 [2024-07-14 19:06:23.574711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.574736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.574872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.574906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.575052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.575078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.575213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.575254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.575354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.575382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.575490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.575515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.575644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.575669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.575793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.575819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.575911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.575941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.576062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.576087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.576264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.576292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.576445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.576470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.576575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.576600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.576701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.576726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.576846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.576871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.576980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.577006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.577125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.577153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.577278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.577304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.577398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.577423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.577521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.577547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.577676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.577700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.577818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.577859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.578005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.578034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.578150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.578175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.578271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.578297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.578421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.578449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.578599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.578627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.578777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.578802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.578957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.578986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.579130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.579160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.579288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.579314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.579424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.579468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.579589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.579614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.579711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.579737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.579951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.579980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.580098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.580123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.580243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.580268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.580380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.580408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.580550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.580587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.580721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.580780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.580962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.580991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.581144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.581170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.581312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.581340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.581486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.581515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.581664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.581690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.581784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.581810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.581986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.582026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.582158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.582184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.582339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.582365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.582605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.582654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.582795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.582821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.582969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.583027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.583146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.583176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.583332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.583358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.583456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.583482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.583653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.583721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.583858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.583891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.584034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.584075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.584266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.584315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.584438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.584463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.584614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.584640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.584764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.584792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.584946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.584972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.585088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.585113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.585260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.470 [2024-07-14 19:06:23.585289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.470 qpair failed and we were unable to recover it.
00:34:35.470 [2024-07-14 19:06:23.585462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.585487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-14 19:06:23.585677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.585705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-14 19:06:23.585808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.585836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-14 19:06:23.585966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.585994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-14 19:06:23.586124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.586149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 
00:34:35.470 [2024-07-14 19:06:23.586319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.586350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-14 19:06:23.586523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.586549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-14 19:06:23.586696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.586726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-14 19:06:23.586867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.586921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-14 19:06:23.587025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.587051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 
00:34:35.470 [2024-07-14 19:06:23.587177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.470 [2024-07-14 19:06:23.587202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.470 qpair failed and we were unable to recover it. 00:34:35.470 [2024-07-14 19:06:23.587402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.587453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.587597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.587623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.587748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.587774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.587908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.587957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.588104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.588131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.588253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.588278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.588504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.588558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.588686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.588712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.588840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.588865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.589058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.589087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.589224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.589249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.589353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.589379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.589500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.589525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.589675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.589701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.589838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.589866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.590018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.590047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.590159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.590184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.590305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.590330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.590472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.590500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.590614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.590639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.590735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.590760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.590947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.590976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.591099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.591125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.591217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.591243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.591367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.591395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.591512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.591537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.591638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.591664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.591787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.591813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.591950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.591976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.592064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.592089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.592208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.592236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.592348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.592374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.592482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.592507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.592633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.592663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.592790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.592820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.592950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.592975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.593108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.593133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.593316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.593341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.593444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.593469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.593588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.593613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.593761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.593789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.593967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.593993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.594118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.594143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.594289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.594314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.594438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.594463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.594642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.594668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.594786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.594819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.594955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.594997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.595144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.595172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.595280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.595305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.595408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.595433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.595572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.595600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.595764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.595790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.595918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.595946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.596037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.596063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.596194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.596219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.596342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.596383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.596544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.596573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.596683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.596708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.596840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.596866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.597045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.597087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.597209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.597241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.597373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.597399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.597525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.597551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.597715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.597742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.597893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.597938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.598042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.598070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.598185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.598210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 00:34:35.471 [2024-07-14 19:06:23.598336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.471 [2024-07-14 19:06:23.598362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.471 qpair failed and we were unable to recover it. 
00:34:35.471 [2024-07-14 19:06:23.598508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.598538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.598747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.598776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.598912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.598956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.599086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.599111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.599267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.599293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.599429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.599458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.599600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.599628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.600623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.600657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.600834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.600864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.601020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.601050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.601221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.601247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.601376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.601423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.601646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.601697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.601817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.601843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.601979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.602006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.602157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.602185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.602308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.602334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.602439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.602464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.602584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.602613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.602798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.602824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.602971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.603000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.603108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.603137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.603306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.603331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.603432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.603456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.603610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.603639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.603789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.603814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.603986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.604016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.604153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.604182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.604299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.604325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.604455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.604481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.604594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.604622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.604785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.604814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.604979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.605010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.605136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.605162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.605293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.605318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.605449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.605475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.605608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.605636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.605786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.605812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.605942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.605984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.606123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.606162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.606337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.606363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.606457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.606483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.606636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.606664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.606838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.606875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.606988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.607031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.607170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.607198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.607329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.607356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.607454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.607480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.607621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.607649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.607802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.607828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.607953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.607980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.608152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.608181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.608353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.608379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.608482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.608509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.608630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.608659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.608812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.608839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.608951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.608978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.609142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.609170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.609317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.609342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.609511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.609539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.609638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.609668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.609792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.609835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.609992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.610017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.610150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.610175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.610293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.610318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.610422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.610448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.610566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.610594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.610717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.610743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.610884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.610909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.611017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.611044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.611167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.611192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.611338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.611364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.611467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.611515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.611644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.611670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.611819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.611846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 
00:34:35.472 [2024-07-14 19:06:23.611995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.472 [2024-07-14 19:06:23.612035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.472 qpair failed and we were unable to recover it. 00:34:35.472 [2024-07-14 19:06:23.612172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.612199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.612326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.612351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.612452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.612477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.612598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.612623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 
00:34:35.473 [2024-07-14 19:06:23.612732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.612760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.612857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.612889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.612996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.613022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.613124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.613149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.613244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.613286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 
00:34:35.473 [2024-07-14 19:06:23.613430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.613455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.613579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.613606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.613750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.613780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.613902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.613929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 00:34:35.473 [2024-07-14 19:06:23.614019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.473 [2024-07-14 19:06:23.614044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.473 qpair failed and we were unable to recover it. 
00:34:35.473 [2024-07-14 19:06:23.614146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.614171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.614260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.614285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.614388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.614412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.614524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.614552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.614716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.614741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.614866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.614899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.615000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.615026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.615155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.615181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.615331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.615357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.615476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.615506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.615631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.615657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.615754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.615781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.615892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.615936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.616034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.616060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.616168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.616194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.616314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.616340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.616456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.616481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.616579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.616605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.616755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.616795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.616933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.616961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.617062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.617088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.617256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.617282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.617384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.617409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.617570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.617595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.617694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.617719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.617852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.617886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.618026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.618052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.618171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.618196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.618326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.618351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.618477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.618502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.618660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.618688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.618787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.618812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.618939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.618965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.619091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.619117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.619212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.619237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.619363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.619390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.619521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.619548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.619646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.619671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.619763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.619788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.619904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.619933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.620074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.620099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.620235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.620260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.620391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.620419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.620515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.620541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.620663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.620688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.620845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.620871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.621019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.621045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.621143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.621168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.621325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.621350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.621476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.621501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.621595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.621621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.621721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.621748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.621887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.621913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.622036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.622061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.622172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.622198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.622295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.473 [2024-07-14 19:06:23.622320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.473 qpair failed and we were unable to recover it.
00:34:35.473 [2024-07-14 19:06:23.622437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.622462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.622560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.622588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.622718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.622743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.622867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.622899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.622997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.623023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.623121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.623146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.623245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.623271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.623428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.623456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.623583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.623609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.623758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.623784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.623887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.623912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.624015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.624040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.624131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.624156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.624291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.624316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.624415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.624440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.624560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.624586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.624704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.624729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.624841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.624889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.624999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.625027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.625149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.625175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.625297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.625324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.625476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.625502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.625629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.625656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.625790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.625817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.625926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.625952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.626050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.626075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.626179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.626206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.626332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.626357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.626455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.626482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.626605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.626632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.626739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.626765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.626868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.626904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.627014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.627040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.627167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.627194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.627350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.627376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.627504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.627530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.627626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.627651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.627750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.627778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.627871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.627905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.628026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.628052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.628157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.628183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.628290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.628316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.628413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.628439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.628587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.628613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.628748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.628787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.628926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.628955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.629088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.629115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.629216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.629247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.629353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.629380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.629477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.629503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.629657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.629683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.629778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.629804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.629945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.629973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.630077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.630104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.630229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.630254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.630380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.630405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.630505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.630530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.630623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.630649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.630800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.474 [2024-07-14 19:06:23.630825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.474 qpair failed and we were unable to recover it.
00:34:35.474 [2024-07-14 19:06:23.630931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.630959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.631083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.631109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.631248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.631274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.631400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.631426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.631548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.631575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 
00:34:35.474 [2024-07-14 19:06:23.631681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.631708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.631835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.631861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.631978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.632017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.632156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.632184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.632309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.632336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 
00:34:35.474 [2024-07-14 19:06:23.632433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.632460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.632612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.632638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.632762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.632789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.632949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.632977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.633079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.633105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 
00:34:35.474 [2024-07-14 19:06:23.633234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.633261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.633386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.474 [2024-07-14 19:06:23.633412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.474 qpair failed and we were unable to recover it. 00:34:35.474 [2024-07-14 19:06:23.633560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.633586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.633711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.633738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.633832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.633858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.633992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.634019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.634145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.634171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.634264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.634290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.634441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.634467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.634594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.634621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.634744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.634769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.634901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.634927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.635080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.635106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.635205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.635231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.635334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.635359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.635454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.635481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.635602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.635628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.635753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.635778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.635873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.635906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.636031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.636057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.636190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.636216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.636318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.636345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.636469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.636495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.636603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.636641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.636774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.636801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.636907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.636935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.637039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.637065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.637163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.637189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.637310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.637336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.637461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.637486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.637649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.637677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.637801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.637828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.637986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.638013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.638110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.638136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.638267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.638293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.638419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.638445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.638569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.638596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.638738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.638777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.638893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.638924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.639051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.639078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.639187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.639217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.639341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.639368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.639470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.639498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.639649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.639675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.639838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.639890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.639999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.640026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.640144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.640170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.640295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.640320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.640406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.640432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.640554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.640582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.640733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.640760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.640888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.640926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.641023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.641049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.641182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.641208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.641313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.641339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.641492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.641518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.641672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.641700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.641824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.641851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.641985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.642024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 00:34:35.475 [2024-07-14 19:06:23.642127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.475 [2024-07-14 19:06:23.642154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.475 qpair failed and we were unable to recover it. 
00:34:35.475 [2024-07-14 19:06:23.642313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.642339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.642430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.642455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.642563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.642589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.642729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.642768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.642888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.642930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.643038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.643065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.643170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.643196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.643318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.643344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.643469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.643495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.643589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.643616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.643758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.643797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.643919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.643947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.644080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.475 [2024-07-14 19:06:23.644107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.475 qpair failed and we were unable to recover it.
00:34:35.475 [2024-07-14 19:06:23.644205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.644231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.644393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.644421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.644513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.644539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.644649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.644675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.644792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.644818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.644948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.644987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.645096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.645122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.645225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.645264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.645452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.645482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.645613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.645679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.645811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.645837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.645948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.645974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.646072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.646097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.646220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.646245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.646371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.646413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.646530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.646558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.646722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.646750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.646892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.646938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.647034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.647060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.647169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.647196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.647318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.647344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.647501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.647530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.647647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.647691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.647829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.647858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.647975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.648001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.648098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.648123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.648214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.648239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.648357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.648385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.648501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.648527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.648664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.648689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.648816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.648844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.649029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.649055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.649188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.649230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.649360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.649388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.649509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.649562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.649692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.649720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.649820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.649848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.649990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.650030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.650158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.650186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.650312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.650340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.765 [2024-07-14 19:06:23.650515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.765 [2024-07-14 19:06:23.650544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.765 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.650655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.650697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.650855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.650894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.651005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.651032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.651136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.651161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.651276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.651301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.651456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.651484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.651600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.651641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.651758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.651786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.651933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.651959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.652060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.652085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.652183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.652208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.652293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.652318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.652486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.652514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.652621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.652649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.652784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.652812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.652960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.652986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.653083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.653108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.653204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.653231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.653366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.653395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.653514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.653542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.653705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.653737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.653868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.653912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.654014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.654042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.654213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.654257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.654460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.654512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.654667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.654718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.654889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.654929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.655029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.655057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.655167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.655197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.655382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.655454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.655675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.655726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.655859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.655897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.656008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.656050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.656182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.656210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.656317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.656345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.656477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.656505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.656617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.656647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.656788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.656831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.656959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.656987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.657126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.657156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.657288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.657333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.657505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.657549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.657653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.657679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.657767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.657793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.657928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.657954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.658047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.658073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.658198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.658224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.658320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.658351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.658469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.658495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.658625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.658652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.658806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.658832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.658949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.658988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.659093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.659120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.659208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.659233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.659355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.659380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.659520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.659549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.659653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.659681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.659785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.659813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.659939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.659967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.660104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.660133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.660291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.766 [2024-07-14 19:06:23.660320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.766 qpair failed and we were unable to recover it.
00:34:35.766 [2024-07-14 19:06:23.660489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.660535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.660690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.660716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.660843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.660869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.661012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.661056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.661146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.661172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 
00:34:35.766 [2024-07-14 19:06:23.661315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.661358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.661461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.661488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.661593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.661619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.661707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.661733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.661831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.661856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 
00:34:35.766 [2024-07-14 19:06:23.661982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.662020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.662122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.662148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.662294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.662338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.662550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.662580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.662719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.662748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 
00:34:35.766 [2024-07-14 19:06:23.662843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.662872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.663006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.663032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.663125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.663166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.663275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.663304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.663459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.663488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 
00:34:35.766 [2024-07-14 19:06:23.663649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.663677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.663813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.663841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.663978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.664006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.664170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.664213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.766 [2024-07-14 19:06:23.664410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.664462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 
00:34:35.766 [2024-07-14 19:06:23.664673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.766 [2024-07-14 19:06:23.664726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.766 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.664865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.664899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.665033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.665058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.665206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.665236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.665494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.665544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.665767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.665814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.665972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.665999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.666098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.666124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.666251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.666276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.666479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.666539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.666736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.666792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.666891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.666933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.667038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.667063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.667158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.667184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.667332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.667373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.667638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.667693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.667808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.667833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.667949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.667975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.668096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.668121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.668249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.668275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.668393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.668434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.668539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.668567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.668701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.668745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.668921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.668946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.669073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.669098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.669226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.669250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.669378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.669421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.669520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.669548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.669705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.669733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.669890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.669947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.670081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.670108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.670259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.670284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.670449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.670504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.670713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.670766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.670918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.670946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.671042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.671067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.671177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.671206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.671340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.671383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.671519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.671577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.671709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.671737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.671886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.671913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.672064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.672090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.672212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.672240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.672405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.672431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.672536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.672561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.672691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.672721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.672850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.672883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.673036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.673062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.673204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.673233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 00:34:35.767 [2024-07-14 19:06:23.673403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.767 [2024-07-14 19:06:23.673428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.767 qpair failed and we were unable to recover it. 
00:34:35.767 [2024-07-14 19:06:23.673556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.673581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.673706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.673731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.673829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.673855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.673987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.674025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.674184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.674226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.674365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.674459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.674566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.674596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.674729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.674757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.674886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.674912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.675019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.675047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.675140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.675183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.675299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.675325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.675482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.675510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.675672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.675700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.675845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.675871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.675981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.676009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.676138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.676180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.676295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.676321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.676447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.676473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.676600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.676630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.676789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.676817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.676971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.677000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.677103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.677129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.677320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.677345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.677463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.677492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.677657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.677686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.677798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.677823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.677948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.677975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.678097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.678122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.678261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.678286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.678414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.678439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.678586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.767 [2024-07-14 19:06:23.678614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.767 qpair failed and we were unable to recover it.
00:34:35.767 [2024-07-14 19:06:23.678734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.678775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.678914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.678956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.679050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.679076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.679200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.679226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.679323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.679349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.679536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.679565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.679677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.679721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.679835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.679864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.679988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.680015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.680111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.680137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.680298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.680341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.680446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.680475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.680648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.680677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.680810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.680844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.681015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.681060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.681199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.681239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.681422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.681467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.681588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.681618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.681763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.681789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.681942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.681968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.682079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.682108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.682254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.682280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.682404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.682430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.682586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.682613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.682710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.682735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.682875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.682919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.683051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.683078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.683194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.683220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.683372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.683397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.683527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.683555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.683673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.683699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.683797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.683824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.683926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.683950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.684051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.684077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.684194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.684221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.684365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.684418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.684581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.684610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.684834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.684863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.685021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.685047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.685163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.685192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.685361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.685391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.685621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.685672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.685797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.685823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.685951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.685977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.686097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.686123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.686274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.686303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.686611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.686663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.686799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.686827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.686963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.686988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.687089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.687114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.687265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.687307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.687565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.687614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.687782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.687811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.687936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.687967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.688123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.688164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.688306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.688330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.688459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.688486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.688668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.688696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.688840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.688866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.688986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.689024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.689173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.689212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.689422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.689482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.689593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.689622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.689763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.689792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.689922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.689962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.690068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.690095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.690220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.690246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.690487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.690541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.690749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.690799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.690900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.690927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.691054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.691080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.691234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.691260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.691439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.691482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.691621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.691671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.691793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.691819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.691929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.691958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.692148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.692177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.692363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.692406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.692534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.692578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.692701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.692726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.692842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.692890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.693039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.693070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.693211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.768 [2024-07-14 19:06:23.693242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.768 qpair failed and we were unable to recover it.
00:34:35.768 [2024-07-14 19:06:23.693378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.768 [2024-07-14 19:06:23.693441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.768 qpair failed and we were unable to recover it. 00:34:35.768 [2024-07-14 19:06:23.693660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.768 [2024-07-14 19:06:23.693710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.768 qpair failed and we were unable to recover it. 00:34:35.768 [2024-07-14 19:06:23.693849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.768 [2024-07-14 19:06:23.693883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.768 qpair failed and we were unable to recover it. 00:34:35.768 [2024-07-14 19:06:23.694033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.768 [2024-07-14 19:06:23.694061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.768 qpair failed and we were unable to recover it. 00:34:35.768 [2024-07-14 19:06:23.694235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.768 [2024-07-14 19:06:23.694278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.768 qpair failed and we were unable to recover it. 
00:34:35.768 [2024-07-14 19:06:23.694425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.768 [2024-07-14 19:06:23.694469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.768 qpair failed and we were unable to recover it. 00:34:35.768 [2024-07-14 19:06:23.694694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.768 [2024-07-14 19:06:23.694747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.768 qpair failed and we were unable to recover it. 00:34:35.768 [2024-07-14 19:06:23.694846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.768 [2024-07-14 19:06:23.694873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.768 qpair failed and we were unable to recover it. 00:34:35.768 [2024-07-14 19:06:23.695051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.768 [2024-07-14 19:06:23.695095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.768 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.695258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.695284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.695431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.695473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.695608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.695634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.695769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.695796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.695914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.695940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.696044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.696087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.696228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.696254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.696351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.696377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.696481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.696507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.696601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.696626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.696768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.696806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.696916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.696945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.697095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.697123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.697233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.697261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.697362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.697390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.697575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.697619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.697746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.697772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.697942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.697972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.698109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.698152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.698283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.698309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.698435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.698480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.698577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.698603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.698755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.698783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.698941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.698970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.699080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.699108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.699212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.699241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.699448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.699500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.699634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.699699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.699836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.699866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.700026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.700051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.700161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.700189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.700396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.700424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.700547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.700572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.700726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.700755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.700905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.700932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.701037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.701075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.701238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.701269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.701403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.701431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.701672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.701719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.701833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.701861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.702042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.702068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.702178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.702207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.702319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.702349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.702514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.702542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.702716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.702744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.702888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.702932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.703057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.703083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.703253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.703281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.703474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.703529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.703683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.703741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.703900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.703929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.704047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.704086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.704198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.704242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.704427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.704452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.704670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.704740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.704853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.704893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.705046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.705071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.705232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.705258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.705412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.705474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.705643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.705671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.705783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.705809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.705939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.705979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.706098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.706137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.706319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.706349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.706513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.706541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 00:34:35.769 [2024-07-14 19:06:23.706676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.769 [2024-07-14 19:06:23.706704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.769 qpair failed and we were unable to recover it. 
00:34:35.769 [2024-07-14 19:06:23.706864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.769 [2024-07-14 19:06:23.706911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.769 qpair failed and we were unable to recover it.
[The same three-line sequence — connect() failed with errno = 111 in posix_sock_create, a sock connection error in nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — repeats over 100 more times between 19:06:23.707043 and 19:06:23.726327, for tqpair values 0x7f8a58000b90, 0x7f8a60000b90, 0x7f8a50000b90, and 0x13aff20, all targeting addr=10.0.0.2, port=4420.]
00:34:35.771 [2024-07-14 19:06:23.726441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.726472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.726643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.726672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.726773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.726800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.726917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.726960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.727063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.727088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.727188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.727215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.727365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.727393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.727519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.727545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.727663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.727692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.727863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.727900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.728017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.728042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.728171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.728196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.728307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.728335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.728503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.728528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.728649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.728691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.728858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.728898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.729048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.729075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.729224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.729267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.729408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.729439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.729591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.729617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.729783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.729812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.729950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.729979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.730106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.730131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.730255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.730281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.730422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.730451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.730597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.730623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.730773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.730815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.730922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.730951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.731080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.731106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.731230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.731257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.731433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.731462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.731580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.731605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.731760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.731786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.731952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.732009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.732141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.732168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.732298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.732324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.732473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.732502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.732613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.732639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.732763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.732788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.732915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.732944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.733068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.733094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.733223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.733248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.733392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.733420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.733560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.733586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.733716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.733741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.733853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.733886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.734003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.734028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.734127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.734153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.734294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.734322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.734501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.734526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.734621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.734647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.734811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.734836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.735001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.735027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.735112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.735138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.735287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.735315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.735460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.735485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.735622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.735647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.735816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.735844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.735982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.736009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.736134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.736160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.736336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.736365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.736534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.736560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.736672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.736726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.736845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.736874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.737034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.737059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.737210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.737235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.737360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.737385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.737483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.737508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.737607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.737633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.737764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.737790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.737910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.737936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.738025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.738051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.738163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.738191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.738365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.738390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.738518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.738566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 
00:34:35.771 [2024-07-14 19:06:23.738671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.738701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.771 [2024-07-14 19:06:23.738847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.771 [2024-07-14 19:06:23.738883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.771 qpair failed and we were unable to recover it. 00:34:35.772 [2024-07-14 19:06:23.739021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.772 [2024-07-14 19:06:23.739047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.772 qpair failed and we were unable to recover it. 00:34:35.772 [2024-07-14 19:06:23.739143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.772 [2024-07-14 19:06:23.739188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.772 qpair failed and we were unable to recover it. 00:34:35.772 [2024-07-14 19:06:23.739339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.772 [2024-07-14 19:06:23.739365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.772 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.757796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.757821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.757945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.757971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.758097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.758130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.758253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.758279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.758401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.758426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.758551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.758581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.758726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.758753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.758858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.758895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.759020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.759046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.759200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.759226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.759372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.759400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.759499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.759528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.759651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.759691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.759858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.759894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.760055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.760080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.760216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.760241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.760369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.760411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.760546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.760573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.760714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.760739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.760891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.760930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.761051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.761091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.761235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.761275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.761456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.761504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.761626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.761669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.761821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.761847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.761983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.762010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.762136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.762162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.762278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.762306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.762421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.762462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.762688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.762740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.762850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.762887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.763015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.763040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.763204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.763229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.763347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.763383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.763548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.763576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.763711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.763740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.763867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.763903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.764048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.764073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.764172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.764197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.764329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.764354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.764491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.764519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.764658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.764687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.764800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.764827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.764976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.765003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.765132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.765157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.765306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.765334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.765469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.765497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.765643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.765671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.765772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.765800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.765947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.765974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.766101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.766126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.766250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.766278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.766416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.766444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.766609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.766637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.766795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.766820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.766975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.767000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.767092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.767117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.767265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.767293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.767431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.767519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.767658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.767686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.767783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.767811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.768002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.768041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.768158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.768196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.768379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.768408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.768611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.768666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 
00:34:35.773 [2024-07-14 19:06:23.768816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.768841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.773 qpair failed and we were unable to recover it. 00:34:35.773 [2024-07-14 19:06:23.768976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.773 [2024-07-14 19:06:23.769002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.769100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.769125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.769314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.769339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.769438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.769480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.769616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.769686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.769804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.769830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.769954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.769979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.770103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.770128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.770276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.770302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.770446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.770471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.770588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.770618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.770812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.770841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.771026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.771052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.771215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.771243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.771407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.771432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.771530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.771572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.771712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.771741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.771893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.771919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.772023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.772048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.772148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.772192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.772349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.772375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.772500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.772546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.772658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.772689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.772846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.772872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.773000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.773025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.773189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.773258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.773490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.773562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.773679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.773707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.773843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.773872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.774022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.774048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.774137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.774162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.774282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.774310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.774571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.774624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.774758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.774786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.774945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.774971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.775099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.775124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.775249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.775291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.775402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.775429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.775545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.775586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.775721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.775749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.775897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.775936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.776075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.776103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.776251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.776280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.776413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.776442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.776555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.776580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.776749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.776793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.776908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.776950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.777054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.777079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.777173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.777219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.777354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.777403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.777642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.777694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.777837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.777867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.778028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.778059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.778199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.778238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.778510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.778560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.778680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.778722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.778874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.778907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.779016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.779042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.779196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.779240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.779388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.779435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.779681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.779735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.779837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.779865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.780002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.780028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.780136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.780164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.780418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.780468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.780666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.780726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.780848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.780875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.781019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.781045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.781189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.781232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.781358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.781384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.781514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.781540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.781664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.781690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.781791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.781817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.781941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.781979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.782094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.782121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.782248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.782279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.782378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.782405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.782587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.782650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.782794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.782819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.782940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.782978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.783088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.783115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.783361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.783409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.783607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.783658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.783794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.783822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.783995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.784020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.784143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.784185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.784344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.784405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.784617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.784645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.784778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.784807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 
00:34:35.774 [2024-07-14 19:06:23.784975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.785002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.785094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.785119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.785220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.785247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.785340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.774 [2024-07-14 19:06:23.785383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.774 qpair failed and we were unable to recover it. 00:34:35.774 [2024-07-14 19:06:23.785542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.785612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 
00:34:35.775 [2024-07-14 19:06:23.785744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.785772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.785865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.785900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.786043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.786069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.786235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.786290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.786458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.786488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 
00:34:35.775 [2024-07-14 19:06:23.786693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.786722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.786838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.786867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.787017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.787043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.787143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.787174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.787273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.787299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 
00:34:35.775 [2024-07-14 19:06:23.787538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.787592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.787741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.787769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.787897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.787940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.788042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.788068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.788200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.788226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 
00:34:35.775 [2024-07-14 19:06:23.788406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.788434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.788573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.788601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.788725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.788767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.788873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.788923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 00:34:35.775 [2024-07-14 19:06:23.789062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.775 [2024-07-14 19:06:23.789088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.775 qpair failed and we were unable to recover it. 
00:34:35.775 [2024-07-14 19:06:23.789239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.789265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.789360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.789402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.789548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.789576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.789709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.789737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.789875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.789909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.790024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.790050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.790154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.790179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.790301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.790326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.790470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.790498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.790657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.790685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.790853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.790886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.791024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.791050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.791172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.791200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.791326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.791354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.791453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.791482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.791655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.791684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.791794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.791822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.791969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.791995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.792108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.792147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.792296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.792341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.792518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.792561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.792660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.792686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.792811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.792837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.792941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.792968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.793109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.793152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.793298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.793340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.793492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.793535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.793659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.793684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.793807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.793837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.793979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.794024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.794137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.794168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.794338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.794380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.794485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.794515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.794648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.794677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.794821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.794846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.794964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.794990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.795120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.795160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.795271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.795296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.795443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.795471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.795631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.795659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.795827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.795852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.795980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.796006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.796178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.796205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.796332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.796376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.796540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.796569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.796702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.796731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.796846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.796871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.797030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.797056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.797224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.797267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.797398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.797442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.797578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.797607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.797733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.797758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.797901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.797928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.798061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.798087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.798230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.798259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.798457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.798491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.798602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.798630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.798738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.798767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.798916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.798942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.799062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.799088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.799305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.799362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.799592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.775 [2024-07-14 19:06:23.799620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.775 qpair failed and we were unable to recover it.
00:34:35.775 [2024-07-14 19:06:23.799752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.799780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.799905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.799947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.800045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.800071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.800192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.800216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.800379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.800407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.800525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.800568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.800673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.800701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.800818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.800846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.801001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.801027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.801121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.801146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.801319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.801347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.801551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.801578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.801691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.801721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.801823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.801850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.802008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.802048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.802143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.802170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.802315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.802359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.802495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.802542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.802714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.802757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.802887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.802913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.803007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.803038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.803167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.803210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.803347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.803390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.803572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.803602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.803738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.803766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.803895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.803937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.804029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.804054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.804182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.804208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.804299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.804324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.804434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.804462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.804595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.804623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.804739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.804781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.804931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.776 [2024-07-14 19:06:23.804958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.776 qpair failed and we were unable to recover it.
00:34:35.776 [2024-07-14 19:06:23.805083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.805109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.805252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.805296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.805482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.805525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.805674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.805717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.805836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.805862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.805976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.806003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.806102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.806129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.806265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.806291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.806539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.806582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.806735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.806761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.806902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.806929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.807108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.807136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.807244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.807272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.807403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.807431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.807569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.807597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.807731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.807759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.807871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.807911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.808038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.808064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.808200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.808234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.808445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.808473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.808585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.808613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.808740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.808768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.808937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.808976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.809147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.809186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.809337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.809367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.809476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.809505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.809644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.809673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.809814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.809843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.809972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.809998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.810149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.810174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.810270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.810295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.810420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.810445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.810632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.810681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.810813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.810842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.811004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.811043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.811164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.811203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.811328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.811373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.811477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.811504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.811748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.811802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.811901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.811928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.812067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.812111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.812262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.812305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.812444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.812488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.812613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.812639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.812733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.812759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.812866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.812899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.812996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.813023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.813164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.813193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.813378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.813421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.813553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.813580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 
00:34:35.776 [2024-07-14 19:06:23.813732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.813758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.813864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.813897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.814052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.814080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.776 qpair failed and we were unable to recover it. 00:34:35.776 [2024-07-14 19:06:23.814225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.776 [2024-07-14 19:06:23.814253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.814380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.814426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.814572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.814600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.814712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.814740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.814881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.814925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.815051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.815079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.815210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.815238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.815369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.815397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.815590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.815639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.815795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.815821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.815920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.815948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.816091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.816135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.816279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.816323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.816431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.816459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.816600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.816626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.816731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.816757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.816891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.816918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.817029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.817058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.817284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.817356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.817531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.817560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.817670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.817699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.817847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.817874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.817984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.818010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.818128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.818170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.818282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.818310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.818441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.818469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.818634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.818662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.818821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.818860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.819004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.819038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.819181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.819223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.819373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.819415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.819536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.819580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.819693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.819731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.819840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.819866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.820006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.820032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.820124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.820150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.820301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.820326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.820423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.820450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.820596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.820642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.820793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.820820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.820989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.821035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.821136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.821162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.821339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.821382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.821520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.821549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.821693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.821720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.821848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.821874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.821998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.822024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.822176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.822204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.822350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.822376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.822525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.822554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.822649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.822677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.822808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.822838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.822991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.823017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.823135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.823164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.823333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.823361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.823471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.823498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.823639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.823670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.823814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.823840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.823941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.823967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.824104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.824149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.824266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.824309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.824437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.824480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.824577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.824603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.824755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.824781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.824907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.824933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.825057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.825083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.825224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.825263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.825399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.825425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.825529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.825560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.825715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.825740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.825843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.825869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.826004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.826030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.826172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.826200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.826328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.826356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.826464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.826492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.826604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.826633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.826764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.826792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.826906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.826948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.827066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.827098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.827290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.827334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.827481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.827527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.827655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.827681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.827813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.827839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.827983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.828025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.828171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.828200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.828365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.828393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.828509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.828535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.828687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.828715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.828822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.828850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.828972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.829014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.829128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.829156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 00:34:35.777 [2024-07-14 19:06:23.829251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.777 [2024-07-14 19:06:23.829279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.777 qpair failed and we were unable to recover it. 
00:34:35.777 [2024-07-14 19:06:23.829474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.829502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.829633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.829661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.829761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.829789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.829953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.829984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.830092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.830119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.830292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.830320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.830456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.830484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.830625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.830653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.830766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.830795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.830920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.830946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.831036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.831062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.831157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.831183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.831358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.831385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.831516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.831544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.831672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.831714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.831885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.831911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.832034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.832059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.832163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.832189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.832296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.832324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.832453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.832481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.832580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.832608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.832800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.832839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.832967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.833005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.833115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.833142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.833269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.833300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.833500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.833528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.833667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.833697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.833841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.833870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.834020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.834045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.834169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.834210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.834440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.834497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.834725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.834753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.834862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.834900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.835070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.835096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.835185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.835210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.835340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.835365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.835539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.835567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.835724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.835752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.835874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.835906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.836038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.836064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.836181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.836209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.836372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.836400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.836557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.836586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.836687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.836715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.836845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.836873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.837001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.837026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.837131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.837170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.837284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.837315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.837483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.837527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.837665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.837709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.837796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.837823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.837973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.838000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.838116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.838145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.838314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.838340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.838457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.838487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.838657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.838684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.838784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.838809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.838952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.838996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.839138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.839183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.839327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.839356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.839467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.839493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.839590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.839616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.839769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.839795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.839941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.839972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.840112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.840143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.840262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.840291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.840425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.840454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.840593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.840622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.840758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.840786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.840934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.840961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.841108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.841152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.841300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.841343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.841456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.841485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.841634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.841661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.841784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.841811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.841959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.841989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.842139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.842181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.842310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.842337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.842457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.842483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.842599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.842625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.842746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.842772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.842861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.842898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.843054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.843079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.843196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.843221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.843364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.843411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.843555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.843599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.843693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.843719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.843850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.843883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.843990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.844016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 00:34:35.778 [2024-07-14 19:06:23.844151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.778 [2024-07-14 19:06:23.844177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.778 qpair failed and we were unable to recover it. 
00:34:35.778 [2024-07-14 19:06:23.844279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.844306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.844410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.844436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.844528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.844553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.844683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.844708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.844800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.844825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 
00:34:35.779 [2024-07-14 19:06:23.844918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.844944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.845042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.845067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.845159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.845200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.845313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.845343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.845456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.845484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 
00:34:35.779 [2024-07-14 19:06:23.845610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.845638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.845749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.845777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.845920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.845946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.846043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.846070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.846241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.846269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 
00:34:35.779 [2024-07-14 19:06:23.846453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.846481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.846618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.846646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.846779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.846807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.846973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.846999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.847095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.847120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 
00:34:35.779 [2024-07-14 19:06:23.847252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.847277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.847429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.847486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.847702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.847754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.847909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.847936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.848032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.848058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 
00:34:35.779 [2024-07-14 19:06:23.848221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.848265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.848381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.848410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.848582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.848608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.848733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.848758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.848855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.848886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 
00:34:35.779 [2024-07-14 19:06:23.849007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.849035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.849152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.849180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.849324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.849351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.849468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.849495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.849599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.849626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 
00:34:35.779 [2024-07-14 19:06:23.849760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.849786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.849930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.849960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.850127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.850153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.850275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.850319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 00:34:35.779 [2024-07-14 19:06:23.850448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.779 [2024-07-14 19:06:23.850474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.779 qpair failed and we were unable to recover it. 
00:34:35.779 [2024-07-14 19:06:23.850602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.779 [2024-07-14 19:06:23.850628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.779 qpair failed and we were unable to recover it.
00:34:35.779-00:34:35.780 [... the three-line record above repeats ~115 times between 19:06:23.850602 and 19:06:23.869148, alternating between tqpair=0x7f8a58000b90 and tqpair=0x13aff20; every attempt targets addr=10.0.0.2, port=4420 and fails the same way with errno = 111 ...]
00:34:35.780 [2024-07-14 19:06:23.869261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.869294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.869458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.869483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.869604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.869629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.869724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.869748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.869854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.869889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 
00:34:35.780 [2024-07-14 19:06:23.870015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.870041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.870134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.870160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.870273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.870299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.870396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.870421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.870551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.870577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 
00:34:35.780 [2024-07-14 19:06:23.870735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.870762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.870888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.870914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.780 qpair failed and we were unable to recover it. 00:34:35.780 [2024-07-14 19:06:23.871019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.780 [2024-07-14 19:06:23.871044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.871142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.871167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.871293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.871318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.871439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.871464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.871604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.871649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.871749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.871774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.871896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.871923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.872041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.872084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.872222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.872266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.872420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.872463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.872558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.872585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.872709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.872735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.872855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.872886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.872992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.873018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.873105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.873131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.873257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.873288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.873386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.873412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.873534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.873560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.873686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.873712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.873858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.873891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.874043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.874069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.874164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.874190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.874336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.874380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.874531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.874574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.874726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.874752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.874872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.874903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.875046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.875075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.875228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.875270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.875400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.875427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.875554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.875580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.875705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.875731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.875834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.875861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.876004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.876029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.876126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.876151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.876240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.876265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.876362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.876387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.876505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.876530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.876671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.876699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.876865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.876901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.877043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.877068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.877182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.877210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.877336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.877364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.877475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.877509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.877674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.877720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.877873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.877904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.878002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.878028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.878179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.878204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.878323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.878365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.878477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.878506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.878655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.878698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.878850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.878882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.878982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.879008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.879157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.879184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.879313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.879340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.879442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.879468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.879561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.879587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.879719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.879745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 00:34:35.781 [2024-07-14 19:06:23.879868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.781 [2024-07-14 19:06:23.879904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.781 qpair failed and we were unable to recover it. 
00:34:35.781 [2024-07-14 19:06:23.880036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-14 19:06:23.880062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-14 19:06:23.880212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-14 19:06:23.880238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-14 19:06:23.880331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-14 19:06:23.880357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-14 19:06:23.880478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-14 19:06:23.880504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 00:34:35.782 [2024-07-14 19:06:23.880601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.782 [2024-07-14 19:06:23.880627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.782 qpair failed and we were unable to recover it. 
00:34:35.782 [2024-07-14 19:06:23.880752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.880778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.880932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.880959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.881091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.881117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.881241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.881267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.881364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.881389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.881575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.881600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.881696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.881722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.881871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.881903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.882028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.882054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.882166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.882193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.882325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.882352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.882480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.882521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.882691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.882719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.882829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.882854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.882970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.882996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.883107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.883134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.883295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.883322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.883460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.883488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.883621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.883649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.883797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.883837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.883984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.884012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.884136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.884162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.884287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.884313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.884465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.884491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.884614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.884640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.884747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.884773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.884901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.884926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.885077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.885102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.885254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.885279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.885378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.885402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.885522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.885547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.885686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.885731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.885829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.885854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.885969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.885996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.886142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.886184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.886359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.886402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.886539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.886582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.886732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.886757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.886887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.886914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.887037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.887063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.887189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.887214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.887336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.887361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.887462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.887487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.887613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.887639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.887765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.887791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.887900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.887926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.888046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.888097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.888248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.888291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.888474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.888517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.888631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.888659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.888787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.888813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.888923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.888963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.889093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.889121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.889270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.889298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.889435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.889464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.889589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.889617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.889719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.889747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.889859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.889895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.890017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.890043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.890186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.890230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.890385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.890428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.890571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.890614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.890739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.890765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.890895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.890921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.891030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.891058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.891247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.891291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.891439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.891483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.891612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.891639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.891791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.891816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.891992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.892021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.892169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.892238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.892400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.892428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.892559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.892588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.892729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.892761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.892896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.892939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.893089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.893114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.893251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.893292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.893409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.893438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.893605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.782 [2024-07-14 19:06:23.893633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.782 qpair failed and we were unable to recover it.
00:34:35.782 [2024-07-14 19:06:23.893771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.893799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.893915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.893942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.894042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.894067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.894228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.894253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.894418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.894446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.894585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.894613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.894784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.894812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.894952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.894978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.895082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.895107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.895233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.895259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.895379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.895420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.895595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.895638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.895762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.895790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.895911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.895937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.896062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.896088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.896180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.896205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.896296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.896321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.896453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.896481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.896598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.896639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.896781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.896807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.896952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.896981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.897120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.897149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.897267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.897310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.897411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.897439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.897629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.897656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.897794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.897822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.897965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.897991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.898109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.898134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.898226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.898251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.898370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.898398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.898502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.898530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.898656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.898681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.898800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.898841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.898945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.898971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.899096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.899121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.899310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.783 [2024-07-14 19:06:23.899338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.783 qpair failed and we were unable to recover it.
00:34:35.783 [2024-07-14 19:06:23.899574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.899602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.899737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.899765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.899901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.899945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.900048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.900073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.900191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.900216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-14 19:06:23.900366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.900394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.900530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.900557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.900667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.900696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.900826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.900855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.900986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.901025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-14 19:06:23.901204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.901252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.901373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.901404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.901595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.901624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.901772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.901799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.901948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.901993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-14 19:06:23.902132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.902164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.902275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.902317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.902456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.902485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.902648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.902675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.902777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.902805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-14 19:06:23.902931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.902957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.903106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.903134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.903230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.903258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.903351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.903379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.903483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.903511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-14 19:06:23.903631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.903659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.903758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.903786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.903927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.903953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.904070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.904095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.904211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.904242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-14 19:06:23.904431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.904459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.904585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.904614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.904752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.904780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.904902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.904929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 00:34:35.783 [2024-07-14 19:06:23.905028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.783 [2024-07-14 19:06:23.905053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.783 qpair failed and we were unable to recover it. 
00:34:35.783 [2024-07-14 19:06:23.905205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.905230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.905390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.905433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.905569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.905597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.905715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.905743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.905899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.905939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.906082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.906111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.906283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.906313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.906540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.906597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.906736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.906762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.906887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.906914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.907085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.907130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.907228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.907255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.907378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.907421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.907591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.907620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.907782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.907807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.907914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.907940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.908052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.908080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.908227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.908255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.908398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.908428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.908573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.908601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.908767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.908795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.908931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.908974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.909072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.909097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.909189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.909214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.909324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.909352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.909516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.909544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.909675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.909703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.909841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.909869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.909998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.910023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.910144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.910169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.910281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.910308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.910427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.910466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.910582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.910610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.910740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.910768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.910889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.910915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.911012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.911038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.911209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.911237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.911375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.911416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.911556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.911584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.911715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.911743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.911894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.911920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.912009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.912034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.912134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.912176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.912296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.912337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.912464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.912492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.912631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.912660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.912801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.912829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.912984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.913010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.913181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.913209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 00:34:35.784 [2024-07-14 19:06:23.913378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.784 [2024-07-14 19:06:23.913403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.784 qpair failed and we were unable to recover it. 
00:34:35.784 [2024-07-14 19:06:23.913567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.913595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.913724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.913752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.913873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.913905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.914021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.914047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.914192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.914220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.914360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.914384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.914487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.914512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.914673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.914699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.914861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.914893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.915004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.915029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.915128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.915153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.915270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.915298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.915435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.915463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.915591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.915619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.915765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.915822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.915956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.915984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.916111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.916137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.916287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.916331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.916443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.916487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.916643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.916669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.916763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.916789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.916915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.916941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.917066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.917095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.917204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.917232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.917398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.917426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.917583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.917611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.917732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.917759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.917886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.917912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.918011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.918037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.784 [2024-07-14 19:06:23.918179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.784 [2024-07-14 19:06:23.918222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.784 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.918366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.918408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.918526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.918568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.918698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.918723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.918851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.918881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.919026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.919069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.919187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.919216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.919410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.919458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.919614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.919640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.919770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.919797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.919920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.919946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.920068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.920093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.920231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.920259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.920399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.920427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.920537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.920565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.920782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.920810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.920952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.920977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.921098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.921123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.921294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.921336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.921441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.921468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.921628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.921653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.921818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.921843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.921975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.922000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.922124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.922166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.922309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.922338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.922502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.922529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.922661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.922689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.922802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.922827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.922918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.922943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.923067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.923092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.923215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.923240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.923334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.923375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.923478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.923506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.923617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.923644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.923809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.923837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.923956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.923982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.924118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.924157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.924311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.924356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.924520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.924547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.924667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.924711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.924833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.924859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.924992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.925019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.925138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.925168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.925284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.925310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.925460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.925487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.925610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.925636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.925763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.925788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.925890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.925916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.926064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.926092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.926223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.926251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.926363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.926390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.926579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.926624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.926778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.926804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.926938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.926964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.927085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.927129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.927309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.927355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.927447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.927473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.927574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.927600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.927695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.927721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.927849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.927874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.928050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.928075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.928196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.928224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.928332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.928360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.928465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.928493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.928593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.928620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.928725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.785 [2024-07-14 19:06:23.928753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.785 qpair failed and we were unable to recover it.
00:34:35.785 [2024-07-14 19:06:23.928894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.785 [2024-07-14 19:06:23.928922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.785 qpair failed and we were unable to recover it. 00:34:35.785 [2024-07-14 19:06:23.929034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.785 [2024-07-14 19:06:23.929077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.785 qpair failed and we were unable to recover it. 00:34:35.785 [2024-07-14 19:06:23.929198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.785 [2024-07-14 19:06:23.929242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.785 qpair failed and we were unable to recover it. 00:34:35.785 [2024-07-14 19:06:23.929389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.785 [2024-07-14 19:06:23.929433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.785 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.929561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.929605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 
00:34:35.786 [2024-07-14 19:06:23.929703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.929729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.929883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.929926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.930033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.930075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.930250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.930293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.930465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.930495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 
00:34:35.786 [2024-07-14 19:06:23.930598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.930628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.930733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.930762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.930933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.930960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.931081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.931107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.931206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.931232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 
00:34:35.786 [2024-07-14 19:06:23.931375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.931403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.931539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.931567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.931705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.931734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.931858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.931897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.932007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.932034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 
00:34:35.786 [2024-07-14 19:06:23.932203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.932246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.932389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.932438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.932548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.932577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.932719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.932745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.932871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.932904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 
00:34:35.786 [2024-07-14 19:06:23.933032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.933057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.933197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.933225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.933488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.933542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.933665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.933690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.933844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.933872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 
00:34:35.786 [2024-07-14 19:06:23.934045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.934070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.934236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.934264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.934403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.934431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.934582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.934610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.934772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.934800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 
00:34:35.786 [2024-07-14 19:06:23.934961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.934988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.935089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.935114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.935212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.935237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.935349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.935377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.786 [2024-07-14 19:06:23.935540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.935567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 
00:34:35.786 [2024-07-14 19:06:23.935696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.786 [2024-07-14 19:06:23.935724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.786 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.935902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.935944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.936071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.936097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.936264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.936292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.936396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.936423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 
00:34:35.787 [2024-07-14 19:06:23.936550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.936599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.936737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.936766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.936939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.936965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.937053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.937082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.937209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.937250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 
00:34:35.787 [2024-07-14 19:06:23.937367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.937395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.937525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.937553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.937653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.937681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.937785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.937813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.937967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.938006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 
00:34:35.787 [2024-07-14 19:06:23.938164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.938191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.938336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.938380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.938551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.938599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.938727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.938753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.938882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.938909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 
00:34:35.787 [2024-07-14 19:06:23.939049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.939075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.939169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.939195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.939323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.939348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.939451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.939476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.939599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.939624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 
00:34:35.787 [2024-07-14 19:06:23.939749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.939774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.939872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.939906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.940030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.940071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.940177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.940205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.940351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.940378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 
00:34:35.787 [2024-07-14 19:06:23.940520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.940548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.940704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.940732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.940890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.940917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.941036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.941061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.941202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.941230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 
00:34:35.787 [2024-07-14 19:06:23.941341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.941389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.941520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.941548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.941663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.941691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.941826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.941851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 00:34:35.787 [2024-07-14 19:06:23.941953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.941979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.787 qpair failed and we were unable to recover it. 
00:34:35.787 [2024-07-14 19:06:23.942110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.787 [2024-07-14 19:06:23.942135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.788 qpair failed and we were unable to recover it. 00:34:35.788 [2024-07-14 19:06:23.942297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.788 [2024-07-14 19:06:23.942322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.788 qpair failed and we were unable to recover it. 00:34:35.788 [2024-07-14 19:06:23.942416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.788 [2024-07-14 19:06:23.942441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.788 qpair failed and we were unable to recover it. 00:34:35.788 [2024-07-14 19:06:23.942556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.788 [2024-07-14 19:06:23.942584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.788 qpair failed and we were unable to recover it. 00:34:35.788 [2024-07-14 19:06:23.942705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.788 [2024-07-14 19:06:23.942747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.788 qpair failed and we were unable to recover it. 
00:34:35.788 [2024-07-14 19:06:23.942890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.788 [2024-07-14 19:06:23.942934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.788 qpair failed and we were unable to recover it.
00:34:35.788 [2024-07-14 19:06:23.943028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.788 [2024-07-14 19:06:23.943054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.788 qpair failed and we were unable to recover it.
00:34:35.788 [2024-07-14 19:06:23.943154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.788 [2024-07-14 19:06:23.943179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.788 qpair failed and we were unable to recover it.
00:34:35.788 [2024-07-14 19:06:23.943271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.788 [2024-07-14 19:06:23.943296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:35.788 qpair failed and we were unable to recover it.
00:34:35.788 [2024-07-14 19:06:23.943437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.788 [2024-07-14 19:06:23.943477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:35.788 qpair failed and we were unable to recover it.
00:34:35.788-00:34:35.791 [2024-07-14 19:06:23.943631 through 19:06:23.966054] the same posix.c:1038:posix_sock_create connect() failure (errno = 111) and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock connection error repeat for tqpair=0x13aff20, 0x7f8a50000b90, 0x7f8a58000b90 and 0x7f8a60000b90 (addr=10.0.0.2, port=4420); every attempt ends with "qpair failed and we were unable to recover it."
00:34:35.791 [2024-07-14 19:06:23.966212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.791 [2024-07-14 19:06:23.966241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:35.791 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.966354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.966380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.966493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.966522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.966628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.966658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.966765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.966794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 
00:34:36.074 [2024-07-14 19:06:23.966953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.966992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.967146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.967195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.967345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.967389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.967529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.967558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.967708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.967734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 
00:34:36.074 [2024-07-14 19:06:23.967906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.967945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.968057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.968085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.968222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.968251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.968425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.968454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.968634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.968689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 
00:34:36.074 [2024-07-14 19:06:23.968821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.968846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.968961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.968987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.969080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.969106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.969206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.969232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.969395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.969423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 
00:34:36.074 [2024-07-14 19:06:23.969541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.969567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.969721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.969749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.969849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.969886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.970027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.970053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.970172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.970200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 
00:34:36.074 [2024-07-14 19:06:23.970299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.970327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.970466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.970494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.970621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.970662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.970812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.970838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.970944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.970973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 
00:34:36.074 [2024-07-14 19:06:23.971093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.971119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.971249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.971274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.074 [2024-07-14 19:06:23.971380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.074 [2024-07-14 19:06:23.971424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.074 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.971536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.971564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.971677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.971703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 
00:34:36.075 [2024-07-14 19:06:23.971837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.971863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.971995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.972021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.972130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.972168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.972343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.972389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.972543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.972585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 
00:34:36.075 [2024-07-14 19:06:23.972689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.972716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.972816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.972843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.973002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.973028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.973168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.973197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.973313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.973355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 
00:34:36.075 [2024-07-14 19:06:23.973518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.973547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.973681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.973710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.973846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.973871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.973969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.973995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.974120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.974163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 
00:34:36.075 [2024-07-14 19:06:23.974331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.974365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.974537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.974566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.974728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.974757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.974892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.974918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.975016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.975043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 
00:34:36.075 [2024-07-14 19:06:23.975154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.975182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.975323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.975352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.975484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.975514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.975681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.975709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.975853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.975884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 
00:34:36.075 [2024-07-14 19:06:23.975987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.976014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.976142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.976168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.976288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.976314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.976441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.976483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.976624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.976653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 
00:34:36.075 [2024-07-14 19:06:23.976775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.976817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.976947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.976973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.977097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.977123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.977215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.977241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.977369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.977395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 
00:34:36.075 [2024-07-14 19:06:23.977532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.977562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.977731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.977760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.075 [2024-07-14 19:06:23.977918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.075 [2024-07-14 19:06:23.977945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.075 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.978042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.978069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.978199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.978225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 
00:34:36.076 [2024-07-14 19:06:23.978388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.978416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.978545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.978574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.978744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.978773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.978897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.978923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.979042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.979068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 
00:34:36.076 [2024-07-14 19:06:23.979194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.979219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.979312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.979338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.979489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.979518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.979642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.979685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 00:34:36.076 [2024-07-14 19:06:23.979788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.076 [2024-07-14 19:06:23.979816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.076 qpair failed and we were unable to recover it. 
00:34:36.076 [2024-07-14 19:06:23.980005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.980043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.980181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.980209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.980335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.980378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.980540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.980568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.980741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.980770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.980890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.980934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.981061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.981086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.981213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.981238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.981339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.981365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.981512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.981543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.981654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.981684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.981846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.981880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.982023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.982050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.982172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.982198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.982299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.982325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.982479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.982508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.982643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.982671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.982806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.982835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.982974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.983001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.983109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.983135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.983320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.983348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.983586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.983638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.983768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.983796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.983951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.983977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.984068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.984094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.984210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.984235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.984381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.984424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.984530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.076 [2024-07-14 19:06:23.984559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.076 qpair failed and we were unable to recover it.
00:34:36.076 [2024-07-14 19:06:23.984760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.984787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.984933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.984959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.985075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.985100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.985196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.985221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.985351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.985383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.985505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.985536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.985702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.985731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.985863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.985900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.986047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.986073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.986194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.986220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.986363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.986392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.986528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.986557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.986664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.986694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.986833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.986862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.987026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.987053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.987181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.987206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.987334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.987359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.987474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.987503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.987626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.987651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.987782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.987807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.987910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.987938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.988066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.988092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.988215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.988241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.988362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.988387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.988539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.988565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.988658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.988701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.988849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.988874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.989007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.989032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.989129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.989154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.989305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.989333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.989506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.989531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.989627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.989656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.989785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.989817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.989968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.989995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.990124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.990165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.990308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.990338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.990477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.990503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.990625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.990650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.990756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.990795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.990898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.990926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.077 qpair failed and we were unable to recover it.
00:34:36.077 [2024-07-14 19:06:23.991077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.077 [2024-07-14 19:06:23.991119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.991266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.991291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.991444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.991469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.991565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.991607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.991715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.991743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.991890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.991916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.992040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.992065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.992180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.992208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.992375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.992400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.992527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.992569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.992714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.992746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.992863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.992895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.993041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.993067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.993254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.993280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.993408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.993434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.993529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.993557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.993662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.993707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.993853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.993884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.994017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.994046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.994139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.994164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.994286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.994312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.994437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.994479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.994577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.994606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.994772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.994801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.994973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.994999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.995122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.995148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.995311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.078 [2024-07-14 19:06:23.995336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.078 qpair failed and we were unable to recover it.
00:34:36.078 [2024-07-14 19:06:23.995437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.078 [2024-07-14 19:06:23.995462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.078 qpair failed and we were unable to recover it. 00:34:36.078 [2024-07-14 19:06:23.995565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.078 [2024-07-14 19:06:23.995593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.078 qpair failed and we were unable to recover it. 00:34:36.078 [2024-07-14 19:06:23.995745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.078 [2024-07-14 19:06:23.995770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.078 qpair failed and we were unable to recover it. 00:34:36.078 [2024-07-14 19:06:23.995943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.078 [2024-07-14 19:06:23.995972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.078 qpair failed and we were unable to recover it. 00:34:36.078 [2024-07-14 19:06:23.996120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.078 [2024-07-14 19:06:23.996145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.078 qpair failed and we were unable to recover it. 
00:34:36.078 [2024-07-14 19:06:23.996269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.078 [2024-07-14 19:06:23.996295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.078 qpair failed and we were unable to recover it. 00:34:36.078 [2024-07-14 19:06:23.996390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.078 [2024-07-14 19:06:23.996416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.078 qpair failed and we were unable to recover it. 00:34:36.078 [2024-07-14 19:06:23.996592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.078 [2024-07-14 19:06:23.996621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.996757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.996783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.996888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.996914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 
00:34:36.079 [2024-07-14 19:06:23.997060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.997088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.997263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.997288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.997388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.997431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.997536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.997565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.997706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.997732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 
00:34:36.079 [2024-07-14 19:06:23.997856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.997887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.998009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.998038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.998162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.998189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.998289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.998319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.998467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.998497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 
00:34:36.079 [2024-07-14 19:06:23.998670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.998696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.998786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.998811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.998966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.998995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.999140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.999166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.999287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.999312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 
00:34:36.079 [2024-07-14 19:06:23.999462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.999491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.999636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.999662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.999791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:23.999816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:23.999988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.000017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.000129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.000155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 
00:34:36.079 [2024-07-14 19:06:24.000279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.000305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.000460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.000489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.000617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.000643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.000772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.000798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.000961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.000988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 
00:34:36.079 [2024-07-14 19:06:24.001105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.001131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.001254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.001280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.001399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.001427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.001540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.001566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.001687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.001713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 
00:34:36.079 [2024-07-14 19:06:24.001906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.001950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.002088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.002117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.002246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.002288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.002403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.002433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.002612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.002639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 
00:34:36.079 [2024-07-14 19:06:24.002767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.002799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.002916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.002946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.079 qpair failed and we were unable to recover it. 00:34:36.079 [2024-07-14 19:06:24.003094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.079 [2024-07-14 19:06:24.003120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.003242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.003268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.003399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.003429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 
00:34:36.080 [2024-07-14 19:06:24.003604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.003630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.003730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.003756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.003936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.003967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.004114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.004140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.004237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.004263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 
00:34:36.080 [2024-07-14 19:06:24.004366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.004392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.004498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.004526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.004627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.004654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.004765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.004823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.004960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.005004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 
00:34:36.080 [2024-07-14 19:06:24.005135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.005162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.005319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.005348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.005462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.005488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.005612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.005637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.005802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.005830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 
00:34:36.080 [2024-07-14 19:06:24.005939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.005966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.006124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.006168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.006301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.006330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.006483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.006509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.006635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.006662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 
00:34:36.080 [2024-07-14 19:06:24.006800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.006829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.006961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.006987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.007077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.007103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.007250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.007279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.007402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.007428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 
00:34:36.080 [2024-07-14 19:06:24.007553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.007578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.007696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.007724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.007886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.007915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.008033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.008058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.008201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.008229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 
00:34:36.080 [2024-07-14 19:06:24.008375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.008400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.008498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.008530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.008665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.008693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.008807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.008832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 00:34:36.080 [2024-07-14 19:06:24.008984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.080 [2024-07-14 19:06:24.009011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.080 qpair failed and we were unable to recover it. 
00:34:36.080 [2024-07-14 19:06:24.009165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.080 [2024-07-14 19:06:24.009204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.080 qpair failed and we were unable to recover it.
00:34:36.080 [2024-07-14 19:06:24.009339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.080 [2024-07-14 19:06:24.009366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.080 qpair failed and we were unable to recover it.
00:34:36.080 [2024-07-14 19:06:24.009495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.080 [2024-07-14 19:06:24.009521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.080 qpair failed and we were unable to recover it.
00:34:36.080 [2024-07-14 19:06:24.009632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.009661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.009808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.009835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.009966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.009992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.010143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.010172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.010348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.010374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.010497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.010541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.010675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.010703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.010838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.010863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.010976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.011002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.011118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.011147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.011321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.011347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.011451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.011476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.011631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.011660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.011829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.011855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.012024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.012063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.012202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.012259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.012414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.012441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.012648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.012702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.012841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.012870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.013032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.013059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.013160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.013186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.013303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.013331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.013479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.013505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.013621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.013649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.013789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.013826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.014014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.014041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.014213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.014242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.014343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.014372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.014523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.014549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.014654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.014682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.014801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.014831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.014988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.015014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.015110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.015136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.015252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.015280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.015403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.015428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.015547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.015574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.015689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.015717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.015843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.015869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.015979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.016006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.016128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.081 [2024-07-14 19:06:24.016170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.081 qpair failed and we were unable to recover it.
00:34:36.081 [2024-07-14 19:06:24.016296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.016323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.016454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.016480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.016601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.016626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.016768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.016796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.016937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.016963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.017084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.017109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.017266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.017291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.017384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.017410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.017589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.017617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.017732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.017758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.017889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.017916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.018035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.018065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.018251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.018286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.018424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.018452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.018612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.018640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.018793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.018818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.018939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.018965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.019081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.019108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.019253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.019278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.019382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.019408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.019533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.019576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.019703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.019731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.019865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.019899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.020021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.020046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.020195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.020220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.020397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.020457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.020674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.020728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.020851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.020883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.021009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.021035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.021126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.021167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.021340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.021366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.021488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.021530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.021682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.021709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.021871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.021902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.022030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.022057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.082 qpair failed and we were unable to recover it.
00:34:36.082 [2024-07-14 19:06:24.022196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.082 [2024-07-14 19:06:24.022226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.022378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.022403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.022552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.022596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.022707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.022741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.022863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.022895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.023021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.023046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.023144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.023189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.023306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.023331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.023425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.023450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.023592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.023620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.023789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.023818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.023942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.023968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.024094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.024119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.024278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.024303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.024454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.024513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.024619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.024647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.024790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.024815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.024946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.024972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.025128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.025172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.025297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.025322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.025451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.025476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.025572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.025598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.025723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.025748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.025874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.025906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.026003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.026029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.026152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.026177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.026272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.026298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.026475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.026505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.026649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.026674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.026796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.026822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.026970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.027000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.027094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.027119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.027268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.027293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.027458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.027509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.027641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.027667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.027785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.027810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.027958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.083 [2024-07-14 19:06:24.027985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.083 qpair failed and we were unable to recover it.
00:34:36.083 [2024-07-14 19:06:24.028139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.083 [2024-07-14 19:06:24.028164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.083 qpair failed and we were unable to recover it. 00:34:36.083 [2024-07-14 19:06:24.028271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.083 [2024-07-14 19:06:24.028297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.083 qpair failed and we were unable to recover it. 00:34:36.083 [2024-07-14 19:06:24.028424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.083 [2024-07-14 19:06:24.028449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.083 qpair failed and we were unable to recover it. 00:34:36.083 [2024-07-14 19:06:24.028576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.083 [2024-07-14 19:06:24.028602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.083 qpair failed and we were unable to recover it. 00:34:36.083 [2024-07-14 19:06:24.028696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.028738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 
00:34:36.084 [2024-07-14 19:06:24.028848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.028887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.029004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.029030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.029164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.029190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.029311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.029336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.029460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.029486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 
00:34:36.084 [2024-07-14 19:06:24.029577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.029603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.029793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.029818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.029922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.029948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.030038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.030063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.030149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.030192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 
00:34:36.084 [2024-07-14 19:06:24.030309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.030334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.030455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.030481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.030583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.030612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.030780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.030805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.030933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.030959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 
00:34:36.084 [2024-07-14 19:06:24.031054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.031083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.031211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.031236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.031362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.031404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.031541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.031584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.031703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.031728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 
00:34:36.084 [2024-07-14 19:06:24.031884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.031927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.032043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.032069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.032197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.032223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.032347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.032388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.032510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.032541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 
00:34:36.084 [2024-07-14 19:06:24.032744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.032773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.032928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.032955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.033079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.033105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.033229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.033256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.033362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.033388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 
00:34:36.084 [2024-07-14 19:06:24.033524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.033550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.033673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.033698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.033796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.033821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.033950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.033978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.034104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.034130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 
00:34:36.084 [2024-07-14 19:06:24.034231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.034257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.034372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.034400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.034555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.034580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.034707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.034732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 00:34:36.084 [2024-07-14 19:06:24.034884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.084 [2024-07-14 19:06:24.034928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.084 qpair failed and we were unable to recover it. 
00:34:36.085 [2024-07-14 19:06:24.035052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.035078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.035207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.035248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.035387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.035419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.035537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.035563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.035665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.035690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 
00:34:36.085 [2024-07-14 19:06:24.035825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.035853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.036006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.036032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.036152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.036177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.036333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.036361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.036475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.036500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 
00:34:36.085 [2024-07-14 19:06:24.036588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.036614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.036766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.036809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.036983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.037010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.037180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.037209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.037322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.037350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 
00:34:36.085 [2024-07-14 19:06:24.037500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.037525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.037675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.037704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.037814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.037843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.037992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.038019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.038155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.038183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 
00:34:36.085 [2024-07-14 19:06:24.038294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.038322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.038441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.038466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.038599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.038624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.038742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.038785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.038966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.038994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 
00:34:36.085 [2024-07-14 19:06:24.039098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.039125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.039271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.039299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.039425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.039451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.039556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.039581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.039694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.039731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 
00:34:36.085 [2024-07-14 19:06:24.039909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.039935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.040036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.040062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.040212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.040240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.040410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.040436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.040534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.040559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 
00:34:36.085 [2024-07-14 19:06:24.040737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.040765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.040937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.040963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.041067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.041092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.041238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.041266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 00:34:36.085 [2024-07-14 19:06:24.041414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.041440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.085 qpair failed and we were unable to recover it. 
00:34:36.085 [2024-07-14 19:06:24.041527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.085 [2024-07-14 19:06:24.041553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.041670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.041698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.041856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.041889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.042021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.042046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.042224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.042253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 
00:34:36.086 [2024-07-14 19:06:24.042379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.042406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.042494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.042520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.042660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.042688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.042807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.042833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.042946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.042973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 
00:34:36.086 [2024-07-14 19:06:24.043080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.043106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.043206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.043231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.043329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.043355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.043490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.043517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.043669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.043695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 
00:34:36.086 [2024-07-14 19:06:24.043839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.043866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.044004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.044035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.044165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.044191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.044278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.044303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.044424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.044453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 
00:34:36.086 [2024-07-14 19:06:24.044576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.044601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.044721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.044746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.044866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.044902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.045044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.045069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.045217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.045259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 
00:34:36.086 [2024-07-14 19:06:24.045395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.045425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.045596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.045622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.045791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.045820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.045971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.045997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.046128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.046153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 
00:34:36.086 [2024-07-14 19:06:24.046286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.046312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.046407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.046450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.046566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.046592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.046717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.046743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.046915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.046954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 
00:34:36.086 [2024-07-14 19:06:24.047063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.047089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.047217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.047241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.047377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.047407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.047552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.047577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.047726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.047767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 
00:34:36.086 [2024-07-14 19:06:24.047902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.086 [2024-07-14 19:06:24.047946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.086 qpair failed and we were unable to recover it. 00:34:36.086 [2024-07-14 19:06:24.048100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.048125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.048263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.048290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.048452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.048478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.048638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.048663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 
00:34:36.087 [2024-07-14 19:06:24.048768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.048810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.048971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.048997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.049145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.049170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.049273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.049297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.049401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.049426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 
00:34:36.087 [2024-07-14 19:06:24.049575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.049600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.049739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.049766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.049917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.049942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.050071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.050095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.050195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.050220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 
00:34:36.087 [2024-07-14 19:06:24.050363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.050390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.050559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.050583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.050677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.050702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.050884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.050912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.051028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.051054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 
00:34:36.087 [2024-07-14 19:06:24.051177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.051202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.051369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.051396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.051571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.051596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.051739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.051767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.051899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.051943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 
00:34:36.087 [2024-07-14 19:06:24.052064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.052089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.052238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.052282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.052443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.052470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.052610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.052635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.052756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.052781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 
00:34:36.087 [2024-07-14 19:06:24.052920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.052969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.053129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.053156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.053284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.053326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.053484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.053513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.053631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.053656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 
00:34:36.087 [2024-07-14 19:06:24.053774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.053800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.087 qpair failed and we were unable to recover it. 00:34:36.087 [2024-07-14 19:06:24.053951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.087 [2024-07-14 19:06:24.053980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.054102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.054129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.054230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.054255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.054412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.054440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 
00:34:36.088 [2024-07-14 19:06:24.054550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.054575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.054706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.054732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.054904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.054933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.055103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.055128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.055222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.055247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 
00:34:36.088 [2024-07-14 19:06:24.055399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.055426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.055543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.055567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.055692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.055716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.055895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.055923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.056072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.056096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 
00:34:36.088 [2024-07-14 19:06:24.056195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.056220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.056322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.056347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.056472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.056498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.056622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.056662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 00:34:36.088 [2024-07-14 19:06:24.056779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.088 [2024-07-14 19:06:24.056822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.088 qpair failed and we were unable to recover it. 
00:34:36.088 [2024-07-14 19:06:24.056944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.056972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.057104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.057130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.057303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.057341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.057481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.057507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.057674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.057702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.057811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.057840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.057984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.058009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.058168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.058196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.058340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.058365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.058493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.058517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.058635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.058665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.058806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.058850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.059017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.059056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.059188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.059216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.059365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.059409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.059554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.059599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.059754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.059796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.059927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.059954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.060111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.060137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.060296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.060322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.060598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.060654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.088 qpair failed and we were unable to recover it.
00:34:36.088 [2024-07-14 19:06:24.060755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.088 [2024-07-14 19:06:24.060781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.060922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.060952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.061113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.061142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.061254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.061281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.061433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.061477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.061634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.061661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.061777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.061802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.061929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.061955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.062092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.062121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.062280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.062308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.062447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.062475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.062672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.062744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.062893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.062920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.063095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.063138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.063236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.063263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.063448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.063506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.063633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.063659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.063811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.063838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.063991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.064020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.064177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.064220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.064340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.064368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.064591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.064651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.064775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.064801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.064920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.064951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.065062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.065090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.065256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.065283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.065405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.065476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.065667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.065712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.065847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.065883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.066023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.066051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.066187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.066215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.066318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.066361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.066489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.066516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.066684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.066712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.066846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.066873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.067081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.067106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.067283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.067311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.067409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.067437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.067579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.067607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.067740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.067768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.067916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.089 [2024-07-14 19:06:24.067959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.089 qpair failed and we were unable to recover it.
00:34:36.089 [2024-07-14 19:06:24.068081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.068110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.068243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.068271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.068408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.068436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.068577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.068608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.068781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.068807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.068936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.068963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.069106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.069150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.069401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.069464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.069581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.069612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.069753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.069780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.069928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.069967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.070103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.070130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.070262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.070288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.070497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.070561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.070742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.070784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.070932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.070958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.071086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.071112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.071263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.071292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.071455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.071483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.071696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.071753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.071863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.071898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.072075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.072101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.072195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.072236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.072349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.072377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.072512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.072542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.072679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.072708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.072817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.072845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.073041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.073081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.073236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.073281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.073456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.073499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.073644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.073688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.073815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.073841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.074002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.090 [2024-07-14 19:06:24.074028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.090 qpair failed and we were unable to recover it.
00:34:36.090 [2024-07-14 19:06:24.074179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.090 [2024-07-14 19:06:24.074205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.090 qpair failed and we were unable to recover it. 00:34:36.090 [2024-07-14 19:06:24.074382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.090 [2024-07-14 19:06:24.074421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.090 qpair failed and we were unable to recover it. 00:34:36.090 [2024-07-14 19:06:24.074558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.090 [2024-07-14 19:06:24.074585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.090 qpair failed and we were unable to recover it. 00:34:36.090 [2024-07-14 19:06:24.074685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.090 [2024-07-14 19:06:24.074712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.090 qpair failed and we were unable to recover it. 00:34:36.090 [2024-07-14 19:06:24.074840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.090 [2024-07-14 19:06:24.074867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.090 qpair failed and we were unable to recover it. 
00:34:36.090 [2024-07-14 19:06:24.074973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.090 [2024-07-14 19:06:24.074999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.090 qpair failed and we were unable to recover it. 00:34:36.090 [2024-07-14 19:06:24.075101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.090 [2024-07-14 19:06:24.075127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.090 qpair failed and we were unable to recover it. 00:34:36.090 [2024-07-14 19:06:24.075262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.090 [2024-07-14 19:06:24.075291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.075428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.075457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.075558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.075586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 
00:34:36.091 [2024-07-14 19:06:24.075712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.075740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.075843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.075871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.075988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.076014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.076141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.076168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.076282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.076326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 
00:34:36.091 [2024-07-14 19:06:24.076495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.076522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.076700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.076729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.076867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.076900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.077037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.077079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.077282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.077334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 
00:34:36.091 [2024-07-14 19:06:24.077478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.077506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.077642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.077671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.077805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.077835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.077992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.078018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.078164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.078193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 
00:34:36.091 [2024-07-14 19:06:24.078453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.078502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.078655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.078680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.078813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.078839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.078984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.079010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.079175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.079204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 
00:34:36.091 [2024-07-14 19:06:24.079365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.079394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.079512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.079540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.079689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.079734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.079856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.079890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.079992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.080018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 
00:34:36.091 [2024-07-14 19:06:24.080121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.080147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.080325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.080354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.080458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.080486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.080600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.080629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.080848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.080885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 
00:34:36.091 [2024-07-14 19:06:24.081054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.081080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.081178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.081209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.091 qpair failed and we were unable to recover it. 00:34:36.091 [2024-07-14 19:06:24.081377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.091 [2024-07-14 19:06:24.081405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.081508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.081549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.081677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.081706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 
00:34:36.092 [2024-07-14 19:06:24.081842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.081867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.082044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.082070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.082182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.082210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.082373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.082402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.082509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.082538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 
00:34:36.092 [2024-07-14 19:06:24.082701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.082730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.082888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.082928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.083077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.083104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.083229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.083255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.083407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.083450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 
00:34:36.092 [2024-07-14 19:06:24.083591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.083634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.083812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.083842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.083999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.084026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.084145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.084175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.084313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.084341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 
00:34:36.092 [2024-07-14 19:06:24.084561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.084625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.084753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.084781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.084884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.084910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.085011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.085038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.085137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.085164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 
00:34:36.092 [2024-07-14 19:06:24.085269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.085297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.085448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.085474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.085594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.085621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.085729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.085754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.085866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.085910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 
00:34:36.092 [2024-07-14 19:06:24.086018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.086045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.086147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.086174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.086299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.086325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.086507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.086551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.086684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.086713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 
00:34:36.092 [2024-07-14 19:06:24.086842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.086870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.087059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.087088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.087222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.087252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.087385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.087414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.087579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.087623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 
00:34:36.092 [2024-07-14 19:06:24.087752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.087778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.087871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.087908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.092 [2024-07-14 19:06:24.088085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.092 [2024-07-14 19:06:24.088128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.092 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.088281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.088328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.088435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.088464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 
00:34:36.093 [2024-07-14 19:06:24.088639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.088669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.088832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.088860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.088992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.089030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.089177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.089206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.089319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.089348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 
00:34:36.093 [2024-07-14 19:06:24.089479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.089507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.089710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.089765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.089883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.089929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.090024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.090050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.090170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.090214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 
00:34:36.093 [2024-07-14 19:06:24.090395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.090424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.090587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.090631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.090750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.090776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.090896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.090922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.091068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.091112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 
00:34:36.093 [2024-07-14 19:06:24.091378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.091430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.091582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.091624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.091777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.091803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.091982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.092013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.092163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.092191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 
00:34:36.093 [2024-07-14 19:06:24.092333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.092377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.092509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.092537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.092649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.092677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.092798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.092832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.092961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.092987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 
00:34:36.093 [2024-07-14 19:06:24.093140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.093183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.093292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.093320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.093430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.093470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.093567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.093595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.093702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.093731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 
00:34:36.093 [2024-07-14 19:06:24.093837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.093862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.093971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.093997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.094122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.094164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.094317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.094346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.094480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.094508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 
00:34:36.093 [2024-07-14 19:06:24.094635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.094663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.094823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.094851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.094971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.093 [2024-07-14 19:06:24.094997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.093 qpair failed and we were unable to recover it. 00:34:36.093 [2024-07-14 19:06:24.095093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.095118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.095280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.095306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 
00:34:36.094 [2024-07-14 19:06:24.095433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.095459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.095606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.095634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.095765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.095794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.095944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.095971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.096100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.096126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 
00:34:36.094 [2024-07-14 19:06:24.096259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.096284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.096379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.096404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.096513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.096541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.096657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.096698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.096798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.096826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 
00:34:36.094 [2024-07-14 19:06:24.096966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.096996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.097094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.097118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.097208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.097233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.097353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.097381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.097501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.097544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 
00:34:36.094 [2024-07-14 19:06:24.097659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.097686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.097849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.097884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.098001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.098025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.098151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.098176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.098273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.098298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 
00:34:36.094 [2024-07-14 19:06:24.098454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.098511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.098667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.098712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.098840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.098867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.098986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.099013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.099135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.099164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 
00:34:36.094 [2024-07-14 19:06:24.099300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.099344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.099472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.099498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.099647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.099673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.099768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.099794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.099899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.099924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 
00:34:36.094 [2024-07-14 19:06:24.100028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.100052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.100150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.100176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.100277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.100302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.100425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.100449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.100549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.100575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 
00:34:36.094 [2024-07-14 19:06:24.100682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.100723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.100842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.100866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.101004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.101034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.094 [2024-07-14 19:06:24.101182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.094 [2024-07-14 19:06:24.101210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.094 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.101343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.101370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 
00:34:36.095 [2024-07-14 19:06:24.101502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.101530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.101697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.101742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.101865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.101897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.102055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.102081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.102263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.102307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 
00:34:36.095 [2024-07-14 19:06:24.102452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.102496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.102670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.102715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.102867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.102900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.103007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.103031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.103126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.103166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 
00:34:36.095 [2024-07-14 19:06:24.103293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.103334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.103446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.103474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.103608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.103637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.103755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.103780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.103904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.103930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 
00:34:36.095 [2024-07-14 19:06:24.104027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.104052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.104170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.104198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.104308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.104336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.104500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.104527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.104660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.104688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 
00:34:36.095 [2024-07-14 19:06:24.104823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.104851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.104993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.105019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.105169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.105227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.105381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.105412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.105579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.105626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 
00:34:36.095 [2024-07-14 19:06:24.105747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.105773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.105901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.105928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.106075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.106119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.106259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.106302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.106432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.106458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 
00:34:36.095 [2024-07-14 19:06:24.106557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.106583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.106713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.106739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.106864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.106896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.107033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.095 [2024-07-14 19:06:24.107073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.095 qpair failed and we were unable to recover it. 00:34:36.095 [2024-07-14 19:06:24.107224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.107252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 
00:34:36.096 [2024-07-14 19:06:24.107386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.107413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.107542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.107570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.107717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.107743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.107869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.107899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.107992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.108017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 
00:34:36.096 [2024-07-14 19:06:24.108169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.108196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.108356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.108384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.108491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.108519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.108702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.108734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.108847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.108872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 
00:34:36.096 [2024-07-14 19:06:24.109009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.109035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.109176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.109206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.109329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.109371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.109526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.109570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.109665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.109691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 
00:34:36.096 [2024-07-14 19:06:24.109794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.109820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.109952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.109983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.110084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.110110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.110239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.110265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.110387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.110413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 
00:34:36.096 [2024-07-14 19:06:24.110529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.110556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.110676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.110702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.110821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.110847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.110999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.111029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.111188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.111232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 
00:34:36.096 [2024-07-14 19:06:24.111404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.111434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.111573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.111602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.111715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.111744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.111846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.111874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.112017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.112047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 
00:34:36.096 [2024-07-14 19:06:24.112183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.112226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.112370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.112399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.112538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.112568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.112726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.112752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.112874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.112908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 
00:34:36.096 [2024-07-14 19:06:24.113000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.113026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.113168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.113212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.113326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.113371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.113515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.096 [2024-07-14 19:06:24.113544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.096 qpair failed and we were unable to recover it. 00:34:36.096 [2024-07-14 19:06:24.113681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.113707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 
00:34:36.097 [2024-07-14 19:06:24.113808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.113834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.113953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.113982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.114126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.114165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.114315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.114354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.114552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.114604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 
00:34:36.097 [2024-07-14 19:06:24.114732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.114758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.114856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.114887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.115032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.115075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.115246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.115290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.115438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.115481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 
00:34:36.097 [2024-07-14 19:06:24.115607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.115633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.115757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.115783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.115912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.115938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.116059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.116085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.116183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.116210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 
00:34:36.097 [2024-07-14 19:06:24.116348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.116374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.116476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.116507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.116626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.116664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.116829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.116868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.116984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.117011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 
00:34:36.097 [2024-07-14 19:06:24.117158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.117187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.117348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.117377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.117480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.117509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.117711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.117739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.117866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.117898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 
00:34:36.097 [2024-07-14 19:06:24.118005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.118035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.118169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.118212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.118353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.118382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.118546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.118590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.118715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.118742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 
00:34:36.097 [2024-07-14 19:06:24.118874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.118924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.119088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.119117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.119289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.119318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.119452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.119481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.119620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.119649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 
00:34:36.097 [2024-07-14 19:06:24.119781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.119807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.119932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.119959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.120050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.120076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.120190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.120219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 00:34:36.097 [2024-07-14 19:06:24.120348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.097 [2024-07-14 19:06:24.120376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.097 qpair failed and we were unable to recover it. 
00:34:36.098 [2024-07-14 19:06:24.120512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.098 [2024-07-14 19:06:24.120540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.098 qpair failed and we were unable to recover it. 00:34:36.098 [2024-07-14 19:06:24.120730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.098 [2024-07-14 19:06:24.120775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.098 qpair failed and we were unable to recover it. 00:34:36.098 [2024-07-14 19:06:24.120936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.098 [2024-07-14 19:06:24.120963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.098 qpair failed and we were unable to recover it. 00:34:36.098 [2024-07-14 19:06:24.121129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.098 [2024-07-14 19:06:24.121167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.098 qpair failed and we were unable to recover it. 00:34:36.098 [2024-07-14 19:06:24.121311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.098 [2024-07-14 19:06:24.121341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.098 qpair failed and we were unable to recover it. 
00:34:36.098 [2024-07-14 19:06:24.121505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.121533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.121734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.121763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.121921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.121949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.122077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.122104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.122262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.122288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.122387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.122431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.122566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.122595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.122745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.122774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.122890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.122935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.123065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.123091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.123243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.123272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.123478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.123535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.123677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.123705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.123886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.123913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.124015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.124041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.124162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.124192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.124337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.124363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.124479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.124504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.124650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.124679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.124826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.124852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.124979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.125005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.125104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.125130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.125295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.125321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.125486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.125514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.125677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.125705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.125852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.125884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.126011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.126037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.126136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.126162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.126290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.126317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.126448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.126494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.126632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.126661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.126803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.126829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.126961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.126988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.127078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.127104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.127204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.098 [2024-07-14 19:06:24.127230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.098 qpair failed and we were unable to recover it.
00:34:36.098 [2024-07-14 19:06:24.127384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.127429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.127575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.127626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.127731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.127759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.127904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.127948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.128054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.128080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.128184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.128210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.128316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.128341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.128458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.128487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.128619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.128646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.128807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.128835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.129001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.129039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.129157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.129196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.129355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.129400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.129520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.129564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.129793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.129837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.129993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.130020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.130132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.130186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.130359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.130402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.130548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.130590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.130690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.130716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.130845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.130871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.130975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.131000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.131101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.131127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.131255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.131281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.131400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.131426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.131521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.131547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.131697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.131722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.131854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.131884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.132016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.132041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.132169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.132197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.132332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.132357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.132508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.132533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.132659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.132685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.132787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.132811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.132913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.132939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.133034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.133058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.133178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.133208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.133377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.133405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.133564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.133591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.133752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.133780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.099 qpair failed and we were unable to recover it.
00:34:36.099 [2024-07-14 19:06:24.133917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.099 [2024-07-14 19:06:24.133960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.134049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.134074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.134263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.134319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.134459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.134523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.134683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.134711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.134857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.134890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.135013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.135038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.135134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.135176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.135312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.135339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.135496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.135524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.135666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.135693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.135798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.135825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.135945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.135970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.136071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.136096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.136225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.136249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.136395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.136423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.136537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.136566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.136731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.136758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.136863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.136898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.137048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.100 [2024-07-14 19:06:24.137074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.100 qpair failed and we were unable to recover it.
00:34:36.100 [2024-07-14 19:06:24.137189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.137216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.137330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.137372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.137472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.137499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.137637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.137664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.137827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.137855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 
00:34:36.100 [2024-07-14 19:06:24.137996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.138035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.138184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.138223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.138375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.138407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.138580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.138609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.138747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.138777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 
00:34:36.100 [2024-07-14 19:06:24.138957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.138989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.139126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.139151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.139304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.139329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.139487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.139513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.139715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.139743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 
00:34:36.100 [2024-07-14 19:06:24.139873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.100 [2024-07-14 19:06:24.139922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.100 qpair failed and we were unable to recover it. 00:34:36.100 [2024-07-14 19:06:24.140023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.140049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.140153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.140179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.140310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.140336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.140441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.140480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-14 19:06:24.140615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.140642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.140754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.140783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.140959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.140986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.141112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.141137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.141272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.141302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-14 19:06:24.141404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.141432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.141554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.141595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.141726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.141754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.141889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.141918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.142053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.142078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-14 19:06:24.142180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.142206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.142301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.142326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.142445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.142473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.142602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.142630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.142752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.142795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-14 19:06:24.142948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.142975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.143073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.143099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.143257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.143300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.143428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.143471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.143611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.143638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-14 19:06:24.143769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.143798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.143954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.143980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.144084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.144109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.144234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.144260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.144416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.144444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-14 19:06:24.144585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.144614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.144735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.144778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.144957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.144983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.145109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.145134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.145255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.145280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-14 19:06:24.145407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.145432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.145526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.145552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.145697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.145725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.145839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.145869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.145992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.146017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 
00:34:36.101 [2024-07-14 19:06:24.146120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.146145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.101 qpair failed and we were unable to recover it. 00:34:36.101 [2024-07-14 19:06:24.146261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.101 [2024-07-14 19:06:24.146288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.146398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.146425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.146560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.146589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.146726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.146753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-14 19:06:24.146860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.146907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.147010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.147034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.147167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.147192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.147290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.147315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.147439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.147467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-14 19:06:24.147601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.147629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.147752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.147793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.147920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.147946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.148040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.148065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.148187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.148215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-14 19:06:24.148357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.148382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.148488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.148512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.148638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.148666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.148852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.148894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.149036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.149060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-14 19:06:24.149154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.149180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.149327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.149354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.149459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.149487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.149696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.149724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.149855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.149886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-14 19:06:24.150001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.150026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.150191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.150219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.150341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.150382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.150528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.150556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.150696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.150724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-14 19:06:24.150858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.150892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.151038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.151062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.151213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.151238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.151390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.151417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.151549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.151576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-14 19:06:24.151697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.151739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.151868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.151907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.152044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.152070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.152193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.152220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.152343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.152384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.102 [2024-07-14 19:06:24.152485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.152512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.152622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.152650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.102 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 19:06:24.152778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.102 [2024-07-14 19:06:24.152805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.152925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.152950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.153096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.153120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 19:06:24.153239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.153266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.153435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.153463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.153585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.153610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.153728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.153752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.153885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.153914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 19:06:24.154092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.154116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.154246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.154271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.154388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.154413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.154543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.154571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.154677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.154704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 19:06:24.154833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.154858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.154987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.155011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.155113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.155138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.155317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.155344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.155515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.155540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 19:06:24.155657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.155698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.155808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.155835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.155948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.155988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.156085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.156113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.156266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.156306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 19:06:24.156408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.156436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.156546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.156586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.156706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.156731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.156885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.156925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.157074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.157100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 19:06:24.157249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.157278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.157399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.157424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.157524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.157550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.157665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.157693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.157796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.157824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 19:06:24.157960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.157986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.158078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.158104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.158252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.158280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.158439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.158467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.158585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.158610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 19:06:24.158731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.158762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.158888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.158917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 19:06:24.159046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.103 [2024-07-14 19:06:24.159085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.159222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.159248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.159377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.159403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 19:06:24.159537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.159563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.159749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.159774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.159925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.159951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.160075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.160101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.160248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.160276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 19:06:24.160413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.160448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.160601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.160627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.160749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.160790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.160945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.160971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.161067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.161093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 19:06:24.161215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.161241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.161342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.161367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.161491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.161520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.161625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.161653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.161806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.161831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 19:06:24.161960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.161986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.162092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.162118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.162236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.162264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.162413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.162439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.162542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.162571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 19:06:24.162750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.162780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.162950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.162989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.163118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.163145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.163236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.163262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.163372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.163400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 19:06:24.163534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.163562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.163713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.163738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.163826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.163851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.163953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.163979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.164083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.164109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 19:06:24.164199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.164225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.164351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.164378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.164502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.104 [2024-07-14 19:06:24.164532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 19:06:24.164688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.164714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 00:34:36.105 [2024-07-14 19:06:24.164805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.164830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 
00:34:36.105 [2024-07-14 19:06:24.164956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.164982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 00:34:36.105 [2024-07-14 19:06:24.165072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.165098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 00:34:36.105 [2024-07-14 19:06:24.165220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.165249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 00:34:36.105 [2024-07-14 19:06:24.165362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.165388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 00:34:36.105 [2024-07-14 19:06:24.165491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.165517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 
00:34:36.105 [2024-07-14 19:06:24.165628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.165657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 00:34:36.105 [2024-07-14 19:06:24.165800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.165828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 00:34:36.105 [2024-07-14 19:06:24.165953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.165979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 00:34:36.105 [2024-07-14 19:06:24.166077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.166103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 00:34:36.105 [2024-07-14 19:06:24.166198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.105 [2024-07-14 19:06:24.166224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.105 qpair failed and we were unable to recover it. 
00:34:36.105 [... the same three-line pattern (posix.c:1038 connect() failed, errno = 111 / nvme_tcp.c:2383 sock connection error / "qpair failed and we were unable to recover it.") repeats ~110 more times between 19:06:24.166344 and 19:06:24.183084; the tqpair handle changes from 0x7f8a60000b90 to 0x13aff20 at 19:06:24.169039, with addr=10.0.0.2, port=4420 throughout ...]
00:34:36.107 [2024-07-14 19:06:24.183185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.183210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.183324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.183351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.183488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.183517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.183676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.183702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.183821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.183846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 
00:34:36.108 [2024-07-14 19:06:24.184006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.184031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.184160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.184184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.184314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.184339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.184473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.184497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.184659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.184684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 
00:34:36.108 [2024-07-14 19:06:24.184831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.184855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.184999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.185024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.185151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.185180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.185320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.185347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.185449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.185488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 
00:34:36.108 [2024-07-14 19:06:24.185615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.185640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.185764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.185805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.185925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.185950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.186078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.186104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.186250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.186274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 
00:34:36.108 [2024-07-14 19:06:24.186371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.186411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.186584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.186613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.186793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.186818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.186915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.186940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.187091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.187117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 
00:34:36.108 [2024-07-14 19:06:24.187310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.187336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.187470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.187495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.187615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.187640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.187763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.187787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.187918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.187946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 
00:34:36.108 [2024-07-14 19:06:24.188081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.188109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.188253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.188277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.188407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.188432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.188589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.188617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.108 [2024-07-14 19:06:24.188778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.188805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 
00:34:36.108 [2024-07-14 19:06:24.188993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.108 [2024-07-14 19:06:24.189019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.108 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.189162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.189191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.189381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.189435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.189583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.189607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.189774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.189805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 
00:34:36.109 [2024-07-14 19:06:24.189954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.189979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.190076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.190102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.190273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.190300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.190445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.190470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.190569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.190594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 
00:34:36.109 [2024-07-14 19:06:24.190761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.190788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.190886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.190913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.191066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.191091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.191190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.191215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.191315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.191339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 
00:34:36.109 [2024-07-14 19:06:24.191474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.191501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.191626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.191651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.191772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.191797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.191945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.191973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.192108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.192135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 
00:34:36.109 [2024-07-14 19:06:24.192255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.192280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.192407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.192432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.192580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.192607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.192739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.192766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.192914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.192940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 
00:34:36.109 [2024-07-14 19:06:24.193041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.193065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.193205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.193233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.193337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.193364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.193476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.193500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.193592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.193617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 
00:34:36.109 [2024-07-14 19:06:24.193712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.193736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.193889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.193914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.194050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.194075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.194241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.194268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.194400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.194427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 
00:34:36.109 [2024-07-14 19:06:24.194532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.194559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.194696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.194720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.194844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.194869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.195022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.195049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.195159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.195185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 
00:34:36.109 [2024-07-14 19:06:24.195329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.195354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.109 qpair failed and we were unable to recover it. 00:34:36.109 [2024-07-14 19:06:24.195454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.109 [2024-07-14 19:06:24.195478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.195608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.195633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.195735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.195760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.195918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.195944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 
00:34:36.110 [2024-07-14 19:06:24.196044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.196086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.196190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.196218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.196358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.196385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.196532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.196557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.196682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.196706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 
00:34:36.110 [2024-07-14 19:06:24.196834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.196862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.197014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.197042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.197190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.197215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.197310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.197335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.197488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.197512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 
00:34:36.110 [2024-07-14 19:06:24.197657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.197684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.197828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.197873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.198029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.198054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.198189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.198215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.198357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.198384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 
00:34:36.110 [2024-07-14 19:06:24.198533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.198558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.198676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.198700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.198852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.198883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.199034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.199059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.199214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.199238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 
00:34:36.110 [2024-07-14 19:06:24.199398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.199439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.199621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.199672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.199802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.199829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.199958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.199983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.200082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.200107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 
00:34:36.110 [2024-07-14 19:06:24.200215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.200242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.200400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.200425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.200578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.200607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.200758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.200785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.200952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.201027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 
00:34:36.110 [2024-07-14 19:06:24.201128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.201156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.201270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.201295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.201434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.201459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.201608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.201634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.201773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.201800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 
00:34:36.110 [2024-07-14 19:06:24.201928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.201954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.202052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.202077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.110 qpair failed and we were unable to recover it. 00:34:36.110 [2024-07-14 19:06:24.202168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.110 [2024-07-14 19:06:24.202209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.202346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.202373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.202497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.202521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 
00:34:36.111 [2024-07-14 19:06:24.202622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.202648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.202801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.202828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.202954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.202979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.203109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.203133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.203261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.203286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 
00:34:36.111 [2024-07-14 19:06:24.203382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.203406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.203553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.203580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.203741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.203769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.203884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.203912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.204055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.204080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 
00:34:36.111 [2024-07-14 19:06:24.204194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.204221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.204361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.204385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.204485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.204509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.204612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.204636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.204732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.204761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 
00:34:36.111 [2024-07-14 19:06:24.204894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.204934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.205057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.205082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.205253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.205295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.205419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.205445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.205547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.205571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 
00:34:36.111 [2024-07-14 19:06:24.205663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.205687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.205832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.205860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.206038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.206064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.206164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.206188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.206311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.206336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 
00:34:36.111 [2024-07-14 19:06:24.206447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.206474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.206583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.206611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.206741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.206782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.206930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.206956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.207052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.207077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 
00:34:36.111 [2024-07-14 19:06:24.207188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.207215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.207338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.207364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.207488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.207513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.207682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.207710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.207843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.207870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 
00:34:36.111 [2024-07-14 19:06:24.208022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.208046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.208141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.208167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.208268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.208293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.111 qpair failed and we were unable to recover it. 00:34:36.111 [2024-07-14 19:06:24.208397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.111 [2024-07-14 19:06:24.208439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-14 19:06:24.208553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.208578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 
00:34:36.112 [2024-07-14 19:06:24.208767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.208796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-14 19:06:24.208935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.208962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-14 19:06:24.209101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.209125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-14 19:06:24.209270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.209294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-14 19:06:24.209417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.209442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 
00:34:36.112 [2024-07-14 19:06:24.209567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.209594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-14 19:06:24.209740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.209769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-14 19:06:24.209935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.209962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-14 19:06:24.210128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.210155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 00:34:36.112 [2024-07-14 19:06:24.210285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.112 [2024-07-14 19:06:24.210310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.112 qpair failed and we were unable to recover it. 
00:34:36.112 [2024-07-14 19:06:24.210462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.112 [2024-07-14 19:06:24.210488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.112 qpair failed and we were unable to recover it.
00:34:36.115 [last message repeated continuously from 19:06:24.210462 through 19:06:24.228777: connect() failed with errno = 111 (ECONNREFUSED) for tqpair=0x13aff20, addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it."]
00:34:36.115 [2024-07-14 19:06:24.228931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.228960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.229095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.229123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.229236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.229261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.229384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.229410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.229520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.229547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-14 19:06:24.229712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.229739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.229860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.229893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.230041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.230083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.230200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.230229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.230357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.230384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-14 19:06:24.230494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.230519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.230616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.230640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.230767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.230795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.230934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.230963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.231136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.231160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-14 19:06:24.231252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.231293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.231425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.231454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.231591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.231618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.231723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.231750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.231884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.231929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-14 19:06:24.232078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.232102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.232239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.232267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.232399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.232424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.232551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.232576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.232727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.232755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-14 19:06:24.232854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.232895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.233066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.233092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.233191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.233232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.233380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.233405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.233537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.233562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 
00:34:36.115 [2024-07-14 19:06:24.233715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.233740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.115 qpair failed and we were unable to recover it. 00:34:36.115 [2024-07-14 19:06:24.233855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.115 [2024-07-14 19:06:24.233903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.234049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.234077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.234211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.234239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.234383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.234407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-14 19:06:24.234526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.234551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.234696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.234723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.234888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.234916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.235039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.235064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.235186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.235215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-14 19:06:24.235367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.235394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.235504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.235530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.235698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.235722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.235871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.235922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.236029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.236056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-14 19:06:24.236181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.236208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.236348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.236373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.236464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.236488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.236587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.236612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.236760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.236788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-14 19:06:24.236916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.236943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.237102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.237127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.237258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.237285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.237420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.237449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.237564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.237589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-14 19:06:24.237738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.237763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.237919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.237947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.238049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.238077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.238185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.238210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.238360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.238385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-14 19:06:24.238498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.238526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.238662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.238690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.238788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.238815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.238927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.238952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.239079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.239104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-14 19:06:24.239198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.239223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.239320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.239349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.239448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.239473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.239596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.239626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.239732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.239760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 
00:34:36.116 [2024-07-14 19:06:24.239890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.239917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.240009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.240035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.116 qpair failed and we were unable to recover it. 00:34:36.116 [2024-07-14 19:06:24.240134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.116 [2024-07-14 19:06:24.240159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-14 19:06:24.240279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-14 19:06:24.240304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-14 19:06:24.240449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-14 19:06:24.240474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 
00:34:36.117 [2024-07-14 19:06:24.240589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-14 19:06:24.240632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-14 19:06:24.240750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-14 19:06:24.240777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-14 19:06:24.240889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-14 19:06:24.240918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-14 19:06:24.241056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-14 19:06:24.241081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 00:34:36.117 [2024-07-14 19:06:24.241172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.117 [2024-07-14 19:06:24.241197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.117 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.258664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.258689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.258837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.258909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.259020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.259048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.259180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.259208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.259322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.259347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.259499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.259541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.259655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.259683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.259857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.259890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.260041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.260067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.260212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.260240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.260349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.260376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.260535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.260563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.260713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.260739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.260890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.260932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.261067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.261094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.261230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.261258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.261402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.261426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.261523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.261548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.261724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.261752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.261889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.261917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.262033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.262059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.262149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.262173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.262269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.262310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.262450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.262477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.262589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.262613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.262735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.262759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.262919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.262946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.263082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.263110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.263254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.263278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.263404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.263428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.263556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.263582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.263700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.263727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.263873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.263906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.264005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.264029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.264153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.264177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.264326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.264351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.264474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.264498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.264600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.264626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.264723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.264748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.264849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.264897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.265043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.265067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.265167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.265193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.265320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.265348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.265480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.265507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.265657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.265682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.265838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.265863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.266025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.266052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.266214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.266242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.266384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.266409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.266533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.266557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 
00:34:36.119 [2024-07-14 19:06:24.266681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.266706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.119 [2024-07-14 19:06:24.266852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.119 [2024-07-14 19:06:24.266886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.119 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.267010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.267034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.267136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.267161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.267322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.267350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 
00:34:36.120 [2024-07-14 19:06:24.267460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.267487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.267633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.267658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.267752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.267776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.267908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.267937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.268048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.268075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 
00:34:36.120 [2024-07-14 19:06:24.268224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.268249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.268368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.268393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.268545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.268573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.268682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.268710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.268856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.268888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 
00:34:36.120 [2024-07-14 19:06:24.268993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.269017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.269110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.269139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.269231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.269270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.269441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.269466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.269571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.269595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 
00:34:36.120 [2024-07-14 19:06:24.269687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.269711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.269887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.269915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.270040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.270064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.270195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.270220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 00:34:36.120 [2024-07-14 19:06:24.270367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.270392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 
00:34:36.120 [2024-07-14 19:06:24.270537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.120 [2024-07-14 19:06:24.270564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.120 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" message pairs repeat continuously through 19:06:24.288122, all for tqpair=0x13aff20, addr=10.0.0.2, port=4420 ...]
00:34:36.404 [2024-07-14 19:06:24.288232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.288260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.288367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.288395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.288507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.288531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.288678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.288703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.288821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.288849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 
00:34:36.404 [2024-07-14 19:06:24.289032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.289058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.289210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.289236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.289354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.289395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.289519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.289546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.289678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.289706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 
00:34:36.404 [2024-07-14 19:06:24.289847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.289871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.290002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.290027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.290151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.290178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.290280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.290308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.290449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.290473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 
00:34:36.404 [2024-07-14 19:06:24.290567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.290591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.290748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.290776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.404 [2024-07-14 19:06:24.290924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.404 [2024-07-14 19:06:24.290950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.404 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.291102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.291128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.291243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.291285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.291395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.291422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.291580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.291608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.291777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.291802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.291903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.291928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.292111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.292139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.292246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.292277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.292423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.292449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.292598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.292640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.292738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.292765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.292930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.292959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.293103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.293128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.293226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.293251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.293415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.293442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.293546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.293573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.293680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.293704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.293804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.293829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.293937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.293962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.294055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.294080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.294179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.294203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.294325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.294349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.294489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.294516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.294653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.294680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.294815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.294844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.294976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.295000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.295095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.295120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.295272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.295297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.295424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.295449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.295575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.295616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.295753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.295780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.295921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.295948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.296073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.296098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.296243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.296268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.296443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.296470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.296590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.296617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.296739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.296764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.296894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.296920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.297035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.297062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.297192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.297219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.297335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.297359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.297485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.297510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.297683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.297712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.297848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.297882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.298002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.298027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.298117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.298143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.298263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.298290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.298428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.298456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.298597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.298626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.298778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.298820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.298922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.298950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.299085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.299112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.299255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.299279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.299386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.299411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.299537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.299562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.299708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.299736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.299880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.299905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.300032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.300057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.300201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.300228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.300367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.300394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.300520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.300544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.300677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.300701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.300832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.300859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.300999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.301027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.301146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.301171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.301264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.301288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.301476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.301501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 
00:34:36.405 [2024-07-14 19:06:24.301591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.301615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.301732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.301757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.301886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.405 [2024-07-14 19:06:24.301927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.405 qpair failed and we were unable to recover it. 00:34:36.405 [2024-07-14 19:06:24.302064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.302093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.302217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.302245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.302389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.302414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.302516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.302541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.302655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.302679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.302800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.302830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.303002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.303027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.303151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.303175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.303337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.303365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.303465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.303492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.303621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.303663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.303770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.303797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.303928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.303953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.304049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.304074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.304222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.304247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.304370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.304414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.304574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.304599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.304727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.304751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.304887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.304921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.305051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.305076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.305251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.305278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.305433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.305458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.305554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.305578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.305724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.305749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.305894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.305922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.306059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.306086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.306232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.306257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.306409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.306451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.306616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.306644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.306785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.306810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.306915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.306940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.307065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.307090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.307208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.307239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.307381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.307409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.307556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.307581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.307676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.307700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.307816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.307844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.307981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.308008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.308158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.308182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.308279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.308304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.308451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.308478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.308616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.308644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.308787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.308812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.308913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.308938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.309066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.309092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.309267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.309294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.309444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.309469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.309595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.309621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.309778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.309805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.309939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.309968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.310088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.310113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.310235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.310260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.310389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.310416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.310524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.310551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.310698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.310723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.310811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.310835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.310983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.311011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.311144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.311171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.311284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.311308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.311412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.311437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.311570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.311594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.311735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.311763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.311900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.311925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.312046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.312070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.312216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.312243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.312376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.312403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.312546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.312572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.312697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.312722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.312901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.312929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.313065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.313093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.313240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.313264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.313390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.313414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.313542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.313570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.313684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.313711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.313854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.313884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.314007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.314031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.314181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.314207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 
00:34:36.406 [2024-07-14 19:06:24.314312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.406 [2024-07-14 19:06:24.314340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.406 qpair failed and we were unable to recover it. 00:34:36.406 [2024-07-14 19:06:24.314507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.314531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.314656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.314680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.314776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.314800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.314913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.314955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 
00:34:36.407 [2024-07-14 19:06:24.315111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.315137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.315233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.315258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.315402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.315430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.315566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.315593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.315737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.315762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 
00:34:36.407 [2024-07-14 19:06:24.315902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.315927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.316080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.316108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.316252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.316278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.316436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.316460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 00:34:36.407 [2024-07-14 19:06:24.316562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.407 [2024-07-14 19:06:24.316603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.407 qpair failed and we were unable to recover it. 
00:34:36.407 [2024-07-14 19:06:24.316739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.316767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.316893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.316921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.317069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.317094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.317222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.317264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.317388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.317415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.317548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.317576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.317720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.317744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.317845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.317870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.318020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.318052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.318205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.318229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.318380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.318405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.318548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.318576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.318687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.318716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.318883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.318912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.319033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.319058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.319180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.319205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.319359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.319387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.319486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.319513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.319660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.319685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.319816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.319840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.319954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.319980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.320099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.320126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.320303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.320329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.320469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.320498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.320606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.320635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.320767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.320795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.320945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.320971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.321095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.321120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.321225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.321253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.321387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.321415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.321528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.321553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.321702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.321727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.321863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.321898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.322030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.322057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.322168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.322193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.322287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.322316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.322418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.322443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.322543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.322568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.322659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.322684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.322834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.322859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.322994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.323022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.323158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.323185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.323333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.323358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.323488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.323513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.323623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.323651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.323783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.323811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.323931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.323957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.324073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.324098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.324209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.324238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.324426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.324451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.324574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.324599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.324687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.324712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.324828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.324856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.324993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.325021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.325192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.325217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.325344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.325387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.325562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.325590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.325698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.325726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.325871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.325903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.326034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.326075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.326215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.326242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.326390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.326415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.407 qpair failed and we were unable to recover it.
00:34:36.407 [2024-07-14 19:06:24.326570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.407 [2024-07-14 19:06:24.326599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.326744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.326772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.326919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.326948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.327051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.327079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.327225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.327250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.327339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.327365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.327484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.327512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.327619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.327647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.327790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.327815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.327903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.327928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.328042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.328071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.328209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.328238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.328375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.328400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.328496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.328521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.328671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.328700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.328831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.328858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.329034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.329059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.329221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.329249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.329411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.329439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.329574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.329602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.329713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.329738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.329838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.329863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.330004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.330032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.330165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.330193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.330336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.330361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.330448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.330473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.330636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.330663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.330761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.330789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.330925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.330951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.331076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.331101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.331246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.331273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.331367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.331395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.331512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.408 [2024-07-14 19:06:24.331537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.408 qpair failed and we were unable to recover it.
00:34:36.408 [2024-07-14 19:06:24.331645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.331670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.331770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.331794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.331888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.331914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.332066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.332091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.332217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.332241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-14 19:06:24.332364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.332388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.332516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.332543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.332686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.332712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.332837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.332889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.333026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.333054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-14 19:06:24.333217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.333245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.333397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.333422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.333528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.333553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.333699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.333724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.333862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.333896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-14 19:06:24.334063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.334088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.334190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.334215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.334367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.334394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.334497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.334525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.334690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.334714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-14 19:06:24.334817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.334842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.334943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.334968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.335089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.335116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.335263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.335289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.335405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.335430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-14 19:06:24.335584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.335611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.335747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.335775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.335917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.335942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.336067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.336091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.336190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.336214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-14 19:06:24.336349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.336377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.336521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.336546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.336675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.336699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.336851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.336890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.337024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.337050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 
00:34:36.408 [2024-07-14 19:06:24.337182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.337211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.337334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.337375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.337538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.337566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.337676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.408 [2024-07-14 19:06:24.337703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.408 qpair failed and we were unable to recover it. 00:34:36.408 [2024-07-14 19:06:24.337849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.337881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 
00:34:36.409 [2024-07-14 19:06:24.337989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.338013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.338159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.338186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.338313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.338341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.338511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.338536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.338683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.338712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 
00:34:36.409 [2024-07-14 19:06:24.338822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.338850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.339008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.339033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.339187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.339212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.339334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.339377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.339506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.339533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 
00:34:36.409 [2024-07-14 19:06:24.340038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.409 [2024-07-14 19:06:24.340095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.409 qpair failed and we were unable to recover it.
00:34:36.409 [2024-07-14 19:06:24.348703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.348731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.348884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.348913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.349060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.349085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.349243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.349286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.349395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.349424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 
00:34:36.409 [2024-07-14 19:06:24.349521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.349550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.349724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.349749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.349850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.349881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.349978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.350004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.350168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.350197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 
00:34:36.409 [2024-07-14 19:06:24.350349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.409 [2024-07-14 19:06:24.350374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.409 qpair failed and we were unable to recover it. 00:34:36.409 [2024-07-14 19:06:24.350474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.350501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.350648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.350677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.350810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.350839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.350964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.350991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.351120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.351146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.351295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.351324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.351434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.351464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.351610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.351637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.351738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.351764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.351894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.351919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.352034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.352063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.352217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.352243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.352366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.352391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.352519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.352545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.352686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.352715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.352886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.352912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.353036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.353077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.353222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.353248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.353373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.353399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.353498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.353525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.353626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.353652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.353772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.353800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.353969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.353998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.354113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.354138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.354241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.354266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.354399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.354440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.354565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.354597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.354722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.354750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.354892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.354937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.355034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.355059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.355238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.355266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.355413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.355439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.355541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.355567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.355691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.355717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.355857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.355899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.356075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.356100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.356269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.356298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.356389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.356417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.356579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.356607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.356723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.356748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.356905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.356931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.357078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.357107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.357245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.357274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.357421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.357447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.357542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.357569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.357710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.357739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.357874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.357909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.358058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.358083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.358204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.358230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.358376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.358405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.358541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.358569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.358687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.358713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.358813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.358840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.358977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.359003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.359159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.359185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.359306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.359332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.359458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.359484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.359606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.359632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.359750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.359779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.359949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.359975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.360097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.360123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.360237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.360265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.360368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.360397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 00:34:36.410 [2024-07-14 19:06:24.360533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.410 [2024-07-14 19:06:24.360558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.410 qpair failed and we were unable to recover it. 
00:34:36.410 [2024-07-14 19:06:24.360654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.410 [2024-07-14 19:06:24.360680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.410 qpair failed and we were unable to recover it.
00:34:36.410 [2024-07-14 19:06:24.360819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.410 [2024-07-14 19:06:24.360848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.410 qpair failed and we were unable to recover it.
00:34:36.410 [2024-07-14 19:06:24.360993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.410 [2024-07-14 19:06:24.361027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.410 qpair failed and we were unable to recover it.
00:34:36.410 [2024-07-14 19:06:24.361170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.410 [2024-07-14 19:06:24.361195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.410 qpair failed and we were unable to recover it.
00:34:36.410 [2024-07-14 19:06:24.361314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.410 [2024-07-14 19:06:24.361340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.410 qpair failed and we were unable to recover it.
00:34:36.410 [2024-07-14 19:06:24.361518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.410 [2024-07-14 19:06:24.361547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.410 qpair failed and we were unable to recover it.
00:34:36.410 [2024-07-14 19:06:24.361653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.410 [2024-07-14 19:06:24.361681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.410 qpair failed and we were unable to recover it.
00:34:36.410 [2024-07-14 19:06:24.361826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.410 [2024-07-14 19:06:24.361852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.410 qpair failed and we were unable to recover it.
00:34:36.410 [2024-07-14 19:06:24.361987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.362013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.362167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.362195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.362336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.362365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.362477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.362502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.362630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.362656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.362834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.362862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.362982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.363007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.363102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.363127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.363232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.363257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.363362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.363388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.363526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.363555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.363668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.363693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.363792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.363817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.363941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.363971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.364113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.364142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.364290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.364316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.364442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.364467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.364616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.364644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.364752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.364782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.364930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.364956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.365062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.365088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.365214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.365243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.365379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.365408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.365543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.365569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.365691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.365717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.365899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.365928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.366028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.366057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.366227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.366252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.366378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.366421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.366562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.366591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.366702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.366730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.366869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.366900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.366999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.367024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.367144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.367172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.367272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.367306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.367449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.367475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.367566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.367591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.367713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.367741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.367905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.367933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.368083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.368108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.368242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.368268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.368367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.368393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.368538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.368566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.368716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.368743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.368870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.368925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.369053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.369082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.369209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.369238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.369371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.369396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.369525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.369568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.369728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.369755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.369888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.369919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.370061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.370087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.370254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.370282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.370443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.370472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.370583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.370611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.370731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.370757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.370884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.370911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.371060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.371089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.371226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.371255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.371370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.371396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.371521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.371546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.371651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.371677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.371890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.371929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.372062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.372089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.372186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.372211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.372302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.372344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.372508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.372537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.372704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.372732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.372863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.372900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.373020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.373045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.373186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.373214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.373358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.373383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.373535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.373576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.373709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.373738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.373845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.373885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.374039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.411 [2024-07-14 19:06:24.374064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.411 qpair failed and we were unable to recover it.
00:34:36.411 [2024-07-14 19:06:24.374202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.374244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.374418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.374444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.374535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.374560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.374655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.374681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.374777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.374802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.374939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.374969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.375101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.375130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.375251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.375277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.375380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.375405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.375500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.375527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.375621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.375662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.375808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.375833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.375942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.375968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.376092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.376117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.376286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.376315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.376443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.376468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.376560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.376585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.376726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.376755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.376902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.376932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.377105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.377130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.377253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.377295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.377431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.377459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.377563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.377592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.377742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.377768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.377888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.377914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.378042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.378071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.378206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.378236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.378351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.378377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.378505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.378531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.378667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.378695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.378834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.412 [2024-07-14 19:06:24.378862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.412 qpair failed and we were unable to recover it.
00:34:36.412 [2024-07-14 19:06:24.379017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.379043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.379135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.379160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.379270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.379298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.379402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.379431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.379608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.379634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-14 19:06:24.379778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.379806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.379940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.379970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.380101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.380135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.380250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.380275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.380374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.380400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-14 19:06:24.380546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.380574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.380684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.380712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.380863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.380898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.381039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.381065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.381250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.381275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-14 19:06:24.381368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.381393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.381488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.381514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.381607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.381632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.381762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.381790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.381907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.381952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-14 19:06:24.382086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.382114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.382251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.382279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.382439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.382469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.382679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.382729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.382843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.382870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-14 19:06:24.382999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.383025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.383163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.383192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.383291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.383320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.383463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.383489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.383592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.383618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-14 19:06:24.383783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.383811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.383921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.383950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.384095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.384121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.384249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.384275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.384406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.384436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-14 19:06:24.384540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.384569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.384747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.384773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.384898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.384941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.385079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.385108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.385361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.385416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 
00:34:36.412 [2024-07-14 19:06:24.385527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.412 [2024-07-14 19:06:24.385553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.412 qpair failed and we were unable to recover it. 00:34:36.412 [2024-07-14 19:06:24.385705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.385730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.385844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.385873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.386015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.386044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.386190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.386217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-14 19:06:24.386354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.386381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.386486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.386511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.386608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.386657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.386807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.386833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.386948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.386973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-14 19:06:24.387072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.387098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.387252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.387280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.387401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.387426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.387576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.387602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.387741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.387771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-14 19:06:24.387919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.387951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.388095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.388121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.388223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.388249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.388344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.388371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.388488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.388516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-14 19:06:24.388653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.388678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.388814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.388839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.388993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.389036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.389153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.389178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.389272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.389298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-14 19:06:24.389421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.389447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.389585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.389615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.389725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.389756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.389868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.389900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.390022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.390048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-14 19:06:24.390163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.390192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.390408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.390460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.390633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.390658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.390835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.390864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 00:34:36.413 [2024-07-14 19:06:24.391040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.391070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it. 
00:34:36.413 [2024-07-14 19:06:24.391282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.413 [2024-07-14 19:06:24.391341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.413 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats continuously from 19:06:24.391 through 19:06:24.410, alternating between tqpair=0x7f8a50000b90 and tqpair=0x7f8a60000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 (connection refused). Roughly 114 further identical repetitions elided.]
00:34:36.414 [2024-07-14 19:06:24.410276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-14 19:06:24.410342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-14 19:06:24.410513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-14 19:06:24.410542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-14 19:06:24.410687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-14 19:06:24.410713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-14 19:06:24.410840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-14 19:06:24.410866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 00:34:36.414 [2024-07-14 19:06:24.411025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.414 [2024-07-14 19:06:24.411058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.414 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.411199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.411259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.411437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.411463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.411551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.411577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.411723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.411753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.411933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.411963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.412079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.412105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.412236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.412262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.412440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.412469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.412605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.412633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.412859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.412898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.413070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.413096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.413258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.413287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.413504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.413558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.413846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.413915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.414061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.414087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.414197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.414222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.414321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.414346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.414443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.414469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.414568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.414593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.414728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.414757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.414918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.414948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.415066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.415092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.415198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.415224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.415367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.415396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.415526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.415554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.415725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.415751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.415919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.415949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.416082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.416111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.416214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.416245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.416354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.416379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.416478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.416503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.416627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.416656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.416823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.416848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.417009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.417035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.417121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.417147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.417261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.417289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.417426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.417455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.417574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.417599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.417719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.417744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.417931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.417961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.418058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.418083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.418206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.418232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.418326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.418352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.418492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.418519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.418621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.418651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.418776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.418802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.418916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.418942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.419076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.419101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.419214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.419243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.419386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.419412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.419538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.419564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.419709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.419738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.419916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.419960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.420125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.420153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.420305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.420331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.420458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.420487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.420623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.420651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.420801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.420827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.420960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.420988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.421085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.421111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.421347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.421405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.421523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.421551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.421702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.421728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.421886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.421916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.422052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.422081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.422223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.422250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.422374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.422400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.422558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.422585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.422737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.422763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.422950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.422977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 
00:34:36.415 [2024-07-14 19:06:24.423064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.415 [2024-07-14 19:06:24.423090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.415 qpair failed and we were unable to recover it. 00:34:36.415 [2024-07-14 19:06:24.423264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.416 [2024-07-14 19:06:24.423293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.416 qpair failed and we were unable to recover it. 00:34:36.416 [2024-07-14 19:06:24.423417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.416 [2024-07-14 19:06:24.423446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.416 qpair failed and we were unable to recover it. 00:34:36.416 [2024-07-14 19:06:24.423586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.416 [2024-07-14 19:06:24.423613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.416 qpair failed and we were unable to recover it. 00:34:36.416 [2024-07-14 19:06:24.423743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.416 [2024-07-14 19:06:24.423770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.416 qpair failed and we were unable to recover it. 
00:34:36.416 [2024-07-14 19:06:24.423927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.416 [2024-07-14 19:06:24.423958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.416 qpair failed and we were unable to recover it. 00:34:36.416 [2024-07-14 19:06:24.424129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.416 [2024-07-14 19:06:24.424160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.416 qpair failed and we were unable to recover it. 00:34:36.416 [2024-07-14 19:06:24.424309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.416 [2024-07-14 19:06:24.424335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.416 qpair failed and we were unable to recover it. 00:34:36.416 [2024-07-14 19:06:24.424462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.416 [2024-07-14 19:06:24.424504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.416 qpair failed and we were unable to recover it. 00:34:36.416 [2024-07-14 19:06:24.424644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.416 [2024-07-14 19:06:24.424677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.416 qpair failed and we were unable to recover it. 
00:34:36.416 [2024-07-14 19:06:24.424817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.424845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.424975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.425001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.425152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.425201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.425375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.425403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.425540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.425569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.425711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.425736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.425859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.425892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.426042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.426076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.426311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.426360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.426503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.426529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.426680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.426722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.426865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.426896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.427018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.427044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.427169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.427194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.427323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.427349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.427471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.427499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.427604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.427632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.427751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.427776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.427924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.427950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.428068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.428097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.428263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.428291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.428434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.428459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.428557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.428583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.428730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.428756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.428850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.428881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.429007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.429034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.429159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.429201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.429319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.429348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.429471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.429499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.429647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.429673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.429774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.429801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.429953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.429982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.430129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.430154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.430303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.430328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.430426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.430453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.430607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.430637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.430774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.430803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.430936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.430963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.431058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.431084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.431301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.431359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.431539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.431565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.431690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.431715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.431891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.431921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.432053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.432082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.432225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.432254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.432397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.432422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.432548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.432573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.432732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.432757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.432888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.432915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.433033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.433059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.433152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.433177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.433320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.433348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.433449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.433478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.433598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.433623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.433745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.433771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.433920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.433958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.434063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.434092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.434235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.434260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.434387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.434412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.434600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.434626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.434728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.434754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.434886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.434912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.435005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.435031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.435151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.435190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.435294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.435324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.435450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.435476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.435606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.435632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.416 [2024-07-14 19:06:24.435780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.416 [2024-07-14 19:06:24.435808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.416 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.435920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.435949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.436072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.436098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.436205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.436231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.436404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.436432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.436571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.436600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.436741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.436767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.436861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.436898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.437054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.437082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.437194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.437223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.437370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.437397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.437518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.437544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.437695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.437727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.437868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.437905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.438027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.438053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.438177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.438202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.438322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.438350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.438460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.438488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.438612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.438638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.438738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.438763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.438874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.438909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.439044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.439073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.439186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.439212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.439336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.439361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.439520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.439546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.439639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.439666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.439817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.439846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.439994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.417 [2024-07-14 19:06:24.440021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.417 qpair failed and we were unable to recover it.
00:34:36.417 [2024-07-14 19:06:24.440116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.440142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.440304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.440332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.440451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.440477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.440624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.440649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.440769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.440798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-14 19:06:24.440932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.440963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.441135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.441161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.441256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.441283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.441439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.441467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.441600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.441628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-14 19:06:24.441762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.441787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.441929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.441955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.442082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.442111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.442236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.442265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.442412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.442437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-14 19:06:24.442562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.442588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.442715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.442743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.442841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.442869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.442997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.443022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.443137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.443162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-14 19:06:24.443260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.443286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.443429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.443458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.443580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.443606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.443731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.443757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.443937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.443971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-14 19:06:24.444102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.444130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.444254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.444280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.444399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.444425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.444561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.444589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.444707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.444735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-14 19:06:24.444899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.444926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.445021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.445047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.445205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.445231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.445381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.445407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.445533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.445559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-14 19:06:24.445653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.445679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.445793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.445822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.445963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.445992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.446125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.446151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.446250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.446275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 
00:34:36.417 [2024-07-14 19:06:24.446396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.446424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.446558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.446586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.446711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.446737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.446872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.446907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.417 qpair failed and we were unable to recover it. 00:34:36.417 [2024-07-14 19:06:24.447056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.417 [2024-07-14 19:06:24.447085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-14 19:06:24.447216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.447244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.447388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.447414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.447576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.447617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.447751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.447779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.447918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.447948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-14 19:06:24.448110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.448135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.448230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.448255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.448428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.448456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.448564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.448593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.448752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.448781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-14 19:06:24.448888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.448932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.449036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.449062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.449169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.449199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.449313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.449339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.449456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.449482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-14 19:06:24.449594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.449623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.449740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.449768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.449915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.449941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.450070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.450096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.450197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.450226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-14 19:06:24.450342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.450368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.450462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.450488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.450587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.450612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.450788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.450825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.450955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.450981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-14 19:06:24.451075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.451101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.451233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.451259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.451410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.451439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.451579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.451607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.451731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.451757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-14 19:06:24.451853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.451884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.451991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.452020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.452151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.452179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.452306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.452332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.452462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.452487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-14 19:06:24.452626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.452655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.452826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.452851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.452963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.452988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.453111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.453137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.453287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.453315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.418 [2024-07-14 19:06:24.453428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.453456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.453566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.453592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.453691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.453716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.453855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.453893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 00:34:36.418 [2024-07-14 19:06:24.454007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.418 [2024-07-14 19:06:24.454036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.418 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.472057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.472083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.472181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.472210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.472321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.472347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.472446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.472472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.472637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.472665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.472804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.472833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.472958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.472984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.473086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.473112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.473271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.473299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.473424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.473452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.473595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.473621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.473745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.473771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.473889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.473919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.474029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.474058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.474176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.474202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.474326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.474352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.474475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.474503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.474636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.474664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.474770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.474799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.474918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.474962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.475063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.475088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.475213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.475241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.475360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.475388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.475528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.475554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.475750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.475779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.475930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.475956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.476114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.476139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.476284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.476313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.476442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.476470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.476605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.476634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.476758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.476783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.476918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.476945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.477102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.477130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.477287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.477312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.477459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.477485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.477652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.477680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.477814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.477843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.477982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.478011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.478160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.478189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.478309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.478334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.478448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.478477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.478588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.478616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.478748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.478773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.478900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.478926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.479075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.479103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.479248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.479274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.479363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.479388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.479541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.479567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.479714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.479742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.479910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.479953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.480079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.480104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.480272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.480300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.480439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.480467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.480566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.480594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.480745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.480771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.480863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.480898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.481002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.481029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.481173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.481202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.481338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.481364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.481516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.481558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.481662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.481691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.481843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.481868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.482005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.482030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.482119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.482145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.482270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.482300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.482440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.482470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.482611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.482636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.482753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.482779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.482896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.482925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 00:34:36.420 [2024-07-14 19:06:24.483035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.420 [2024-07-14 19:06:24.483064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.420 qpair failed and we were unable to recover it. 
00:34:36.420 [2024-07-14 19:06:24.483216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.483252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.483422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.483450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.483578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.483606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.483716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.483746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.483873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.483904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 
00:34:36.421 [2024-07-14 19:06:24.484006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.484031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.484152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.484189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.484339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.484364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.484494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.484520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.484640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.484681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 
00:34:36.421 [2024-07-14 19:06:24.484814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.484842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.484997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.485025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.485147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.485173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.485320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.485346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 00:34:36.421 [2024-07-14 19:06:24.485488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.421 [2024-07-14 19:06:24.485516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.421 qpair failed and we were unable to recover it. 
00:34:36.421 [2024-07-14 19:06:24.485653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.485681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.485799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.485824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.485921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.485946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.486101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.486126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.486243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.486271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.486443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.486469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.486601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.486631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.486726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.486752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.486846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.486896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.487048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.487074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.487245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.487274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.487383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.487412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.487549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.487577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.487688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.487714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.487867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.487903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.488027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.488052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.488141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.488183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.488298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.488324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.488460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.488486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.488592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.488621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.488751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.488780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.488904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.488931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.489031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.489057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.489184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.489210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.489354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.489382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.489551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.489577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.489702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.489744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.489853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.489888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.490039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.490064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.490155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.490187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.490310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.490336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.490457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.490482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.490587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.490616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.490791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.490819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.490966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.490992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.491087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.491113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.491268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.491297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.491443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.491469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.491600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.491643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.491781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.491810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.491965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.491992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.492099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.492127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.492288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.492317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.492420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.492449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.492572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.492598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.492749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.492774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.492923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.492957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.493065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.493095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.493205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.493234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.493374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.493399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.493521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.493547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.493696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.493726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.493831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.493859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.493988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.494013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.494140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.494166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.494299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.494326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.494477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.421 [2024-07-14 19:06:24.494503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.421 qpair failed and we were unable to recover it.
00:34:36.421 [2024-07-14 19:06:24.494624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.494650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.494772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.494798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.494980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.495009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.495145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.495174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.495317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.495342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.495465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.495491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.495670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.495698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.495806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.495834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.495958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.495984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.496081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.496107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.496250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.496279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.496387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.496416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.496534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.496559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.496688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.496714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.496806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.496831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.496969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.496996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.497102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.497127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.497232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.497258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.497408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.497436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.497536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.497565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.497738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.497764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.497942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.497971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.498111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.498139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.498252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.498281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.498426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.498452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.498574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.498600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.498754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.498783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.498916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.498946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.499110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.499136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.499262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.499308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.499460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.499486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.499610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.499637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.499765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.499791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.499924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.499950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.500042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.500067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.500164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.500189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.500381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.500406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.500502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.422 [2024-07-14 19:06:24.500527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.422 qpair failed and we were unable to recover it.
00:34:36.422 [2024-07-14 19:06:24.500688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.500714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.500873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.500904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.501042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.501069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.501227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.501268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.501368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.501398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 
00:34:36.422 [2024-07-14 19:06:24.501538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.501567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.501720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.501745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.501840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.501867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.501962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.502004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.502113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.502142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 
00:34:36.422 [2024-07-14 19:06:24.502308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.502334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.502427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.502452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.502589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.502617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.502745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.502773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.502924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.502951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 
00:34:36.422 [2024-07-14 19:06:24.503065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.503091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.503254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.503281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.503390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.503418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.503545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.503571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.503721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.503746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 
00:34:36.422 [2024-07-14 19:06:24.503885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.503913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.504087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.504113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.504263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.504288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.504412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.504457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.504622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.504650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 
00:34:36.422 [2024-07-14 19:06:24.504777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.504805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.504961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.504987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.505134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.505162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.505339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.505368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.505505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.505534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 
00:34:36.422 [2024-07-14 19:06:24.505655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.505680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.505829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.505860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.506089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.506118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.506226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.506269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.506398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.506424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 
00:34:36.422 [2024-07-14 19:06:24.506519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.506546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.506703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.506732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.506863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.506899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.507028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.507053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 00:34:36.422 [2024-07-14 19:06:24.507185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.422 [2024-07-14 19:06:24.507211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.422 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.507365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.507391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.507528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.507556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.507698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.507723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.507823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.507851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.508024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.508053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.508165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.508193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.508346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.508371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.508496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.508521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.508645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.508674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.508809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.508837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.508997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.509024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.509150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.509175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.509302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.509327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.509463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.509492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.509630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.509655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.509748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.509773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.509920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.509950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.510056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.510084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.510244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.510270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.510367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.510394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.510575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.510603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.510742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.510770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.510946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.510972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.511064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.511089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.511242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.511270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.511433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.511461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.511610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.511635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.511754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.511795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.511925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.511953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.512059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.512088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.512236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.512262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.512409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.512438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.512590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.512618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.512718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.512746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.512891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.512926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.513097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.513125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.513229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.513257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.513402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.513427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.513548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.513573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.513700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.513725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.513908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.513938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.514041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.514069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.514210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.514236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.514338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.514363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 00:34:36.423 [2024-07-14 19:06:24.514469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.423 [2024-07-14 19:06:24.514494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.423 qpair failed and we were unable to recover it. 
00:34:36.423 [2024-07-14 19:06:24.514644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.423 [2024-07-14 19:06:24.514672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.423 qpair failed and we were unable to recover it.
[... the three messages above repeat for every retry from 19:06:24.514644 through 19:06:24.533075 — identical errno (111), tqpair (0x7f8a60000b90), addr (10.0.0.2), and port (4420) throughout; repeats trimmed ...]
00:34:36.425 [2024-07-14 19:06:24.533205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.533234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.533343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.533371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.533487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.533513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.533662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.533688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.533830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.533859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.533977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.534005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.534113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.534139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.534238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.534263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.534387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.534412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.534563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.534591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.534766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.534791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.534964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.534993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.535136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.535164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.535304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.535331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.535445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.535470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.535595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.535621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.535763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.535790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.535905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.535935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.536074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.536100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.536263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.536288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.536478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.536503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.536627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.536652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.536780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.536805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.536929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.536955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.537060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.537088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.537229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.537258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.537435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.537460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.537583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.537626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.537731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.537760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.537907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.537936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.538107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.538132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.538249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.538297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.538413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.538441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.538582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.538610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.538736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.538761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.538889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.538916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.539033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.539061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.539161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.539200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.539395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.539422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.539550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.539577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.539728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.539754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.539886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.539914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.540061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.540086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.540251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.540294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.540402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.540430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.540567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.540600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.540723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.540749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.540868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.540900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.541025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.541054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.541168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.541196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.541410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.541435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.541580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.541608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.541751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.541779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.541901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.541930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.542063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.542089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.542222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.542247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.542425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.542453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.542583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.542613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.542760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.542785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.542915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.542956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.543138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.543164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.543261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.543287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.543447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.543472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.543601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.543627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 00:34:36.425 [2024-07-14 19:06:24.543730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.425 [2024-07-14 19:06:24.543756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.425 qpair failed and we were unable to recover it. 
00:34:36.425 [2024-07-14 19:06:24.543887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.543913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.544038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.544063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.544229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.544256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.544391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.544420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.544524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.544552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.544705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.544730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.544897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.544931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.545042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.545070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.545221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.545246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.545372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.545397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.545563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.545591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.545697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.545724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.545868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.545915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.546037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.546062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.546184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.546209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.546346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.546374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.546524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.546549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.546677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.546702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.546795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.546820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.546956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.546986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.547092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.547120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.547244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.547270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.547390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.547415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.547590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.547618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.547767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.547792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.547888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.547914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.548041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.548067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.548160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.548186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.548271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.548297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.548421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.548446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.548622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.548650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.548814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.548842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.548977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.549005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.549134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.549161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.549290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.549315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.549455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.549483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.549615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.549643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.549755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.549779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.549907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.549933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.550080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.550109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.550248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.550276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.550421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.550446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.550574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.550599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.550742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.550770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.550909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.550938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.551109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.551134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.551307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.551340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.551478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.551506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.551622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.551650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.551763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.551788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 
00:34:36.426 [2024-07-14 19:06:24.551947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.551972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.426 [2024-07-14 19:06:24.552118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.426 [2024-07-14 19:06:24.552146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.426 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.552314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.552342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.552459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.552486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.552584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.552609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-14 19:06:24.552744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.552772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.552889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.552918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.553035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.553060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.553181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.553207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.553345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.553373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-14 19:06:24.553508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.553535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.553700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.553728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.553892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.553936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.554027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.554052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.554154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.554179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-14 19:06:24.554270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.554295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.554382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.554408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.554524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.554552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.554698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.554726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.554844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.554869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-14 19:06:24.555000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.555026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.555120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.555146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.555292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.555321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.555470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.555496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.555642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.555684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-14 19:06:24.555787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.555815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.555977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.556006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.556146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.556171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.556304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.556329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.556446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.556474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-14 19:06:24.556618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.556645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.556765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.556790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.556887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.556912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.557046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.557077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.557206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.557234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-14 19:06:24.557376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.557401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.557528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.557557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.557712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.557740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.557890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.557919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.558065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.558090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 
00:34:36.427 [2024-07-14 19:06:24.558181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.558207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.558315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.558344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.558505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.558534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.558714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.558739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.427 qpair failed and we were unable to recover it. 00:34:36.427 [2024-07-14 19:06:24.558823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.427 [2024-07-14 19:06:24.558848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 
00:34:36.428 [2024-07-14 19:06:24.558973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.559002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-14 19:06:24.559134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.559163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-14 19:06:24.559285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.559311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-14 19:06:24.559404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.559429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-14 19:06:24.559542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.559571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 
00:34:36.428 [2024-07-14 19:06:24.559709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.559738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-14 19:06:24.559911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.559937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-14 19:06:24.560058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.560101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-14 19:06:24.560219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.560247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 00:34:36.428 [2024-07-14 19:06:24.560375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.428 [2024-07-14 19:06:24.560403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.428 qpair failed and we were unable to recover it. 
00:34:36.428 [2024-07-14 19:06:24.560554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.560580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.560681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.560707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.560841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.560869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.561020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.561048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.561158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.561183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.561304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.561330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.561480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.561508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.561624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.561652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.561773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.561815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.561960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.561986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.562138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.562182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.562283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.562311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.562453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.562479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.562649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.562677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.562808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.562836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.562968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.562994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.563085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.563110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.563203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.563228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.563347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.563376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.563502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.563530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.563639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.563664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.563753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.563783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.563974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.564003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.564104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.564132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.564278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.564304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.564432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.564458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.564584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.564612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.564775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.564803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.564931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.564958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.565090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.565115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.565241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.565266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.565368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.565393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.565528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.428 [2024-07-14 19:06:24.565553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.428 qpair failed and we were unable to recover it.
00:34:36.428 [2024-07-14 19:06:24.565727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.565755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.565903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.565932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.566047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.566078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.566236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.566261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.566362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.566387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.566480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.566505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.566642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.566670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.566785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.566810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.566897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.566922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.567025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.567050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.567187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.567215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.567360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.567385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.567512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.567537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.567660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.567688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.567819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.567848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.568027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.568057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.568226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.568254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.568443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.568496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.568659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.568688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.568824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.568850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.568976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.569002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.569123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.569151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.569281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.569309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.569435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.569460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.569586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.569611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.569743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.569768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.569917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.569946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.570052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.570077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.570174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.570199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.570311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.570337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.570486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.570514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.570650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.570675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.570799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.570824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.570975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.571004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.571114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.571142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.571268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.571293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.571409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.571434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.571591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.571616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.571747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.571772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.571922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.571947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.572120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.572147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.572323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.572348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.572474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.572500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.429 [2024-07-14 19:06:24.572645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.429 [2024-07-14 19:06:24.572670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.429 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.572836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.572864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.573037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.573066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.573170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.573197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.573369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.573394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.573511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.573553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.573688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.573716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.573852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.573894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.574020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.574045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.574143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.574168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.574302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.574330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.574473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.574501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.574635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.574665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.574768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.574795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.574926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.574952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.575077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.575103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.575194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.575219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.575344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.575370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.575492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.575521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.575629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.575658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.575771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.575797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.575898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.575924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.576019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.576044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.576185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.576213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.576334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.576360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.576445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.576470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.576602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.576628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.576733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.576762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.576934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.576960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.577084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.577126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.577270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.577295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.577416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.577441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.577566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.577592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.577712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.577754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.577867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.577918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.578023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.578053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.578192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.430 [2024-07-14 19:06:24.578218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.430 qpair failed and we were unable to recover it.
00:34:36.430 [2024-07-14 19:06:24.578310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.431 [2024-07-14 19:06:24.578335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.431 qpair failed and we were unable to recover it.
00:34:36.431 [2024-07-14 19:06:24.578467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.431 [2024-07-14 19:06:24.578495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.431 qpair failed and we were unable to recover it.
00:34:36.431 [2024-07-14 19:06:24.578609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.578637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.578776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.578801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.578907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.578933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.579040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.579068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.579241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.579269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-14 19:06:24.579411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.579437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.579556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.579582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.579725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.579753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.579848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.579881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.579992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.580017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-14 19:06:24.580142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.580168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.580310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.580338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.580500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.580528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.580649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.580679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.580800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.580826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-14 19:06:24.580949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.580979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.581085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.581114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.581260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.581286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.581403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.581429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.581572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.581600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-14 19:06:24.581708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.581736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.581859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.581889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.582009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.582036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.582179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.582207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.582349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.582377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-14 19:06:24.582521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.582546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.582671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.582696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.582844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.582873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.583008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.583034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.583133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.583159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-14 19:06:24.583260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.583285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.583382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.583407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.583502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.583545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.583681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.583707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.583859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.583906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-14 19:06:24.584050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.584079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.584192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.584220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.584367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.584393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.584490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.584515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.584630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.584658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 
00:34:36.431 [2024-07-14 19:06:24.584773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.584801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.584937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.431 [2024-07-14 19:06:24.584964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.431 qpair failed and we were unable to recover it. 00:34:36.431 [2024-07-14 19:06:24.585079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.585105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.585240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.585269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.585406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.585434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-14 19:06:24.585539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.585564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.585663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.585688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.585834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.585862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.585984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.586011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.586161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.586187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-14 19:06:24.586275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.586301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.586417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.586445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.586558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.586586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.586734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.586763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.586885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.586929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-14 19:06:24.587036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.587064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.587245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.587270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.587423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.587448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.587586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.587614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.587752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.587780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-14 19:06:24.587935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.587961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.588080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.588106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.588254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.588297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.588462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.588490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.588627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.588655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-14 19:06:24.588748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.588789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.588917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.588943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.589068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.589096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.589253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.589279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.589408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.589435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-14 19:06:24.589555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.589596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.589737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.589766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.589907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.589936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.590079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.590104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.590225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.590250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-14 19:06:24.590399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.590428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.590563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.590591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.590704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.590730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.590840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.590866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.591045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.591073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-14 19:06:24.591186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.591214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.591342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.591367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.591517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.591542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.591685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.591713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.591852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.591885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 
00:34:36.432 [2024-07-14 19:06:24.592029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.432 [2024-07-14 19:06:24.592054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.432 qpair failed and we were unable to recover it. 00:34:36.432 [2024-07-14 19:06:24.592208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.592248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.592357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.592385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.592528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.592556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.592674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.592700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-14 19:06:24.592858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.592889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.593038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.593067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.593167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.593195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.593314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.593343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.593444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.593469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-14 19:06:24.593615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.593643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.593774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.593803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.593916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.593942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.594071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.594096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.594231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.594259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-14 19:06:24.594358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.594387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.594512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.594538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.594687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.594712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.594856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.594891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.595019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.595044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-14 19:06:24.595171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.595196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.595293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.595319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.595443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.595472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.595578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.595606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.595733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.595760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-14 19:06:24.595885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.595911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.596088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.596116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.596242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.596270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.596377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.596403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.596546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.596571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-14 19:06:24.596684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.596713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.596824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.596852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.596979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.597005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.597125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.597150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.597293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.597321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-14 19:06:24.597427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.597455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.597594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.597620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.597737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.597762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.597910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.597940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.598073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.598101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 
00:34:36.433 [2024-07-14 19:06:24.598249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.598274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.598397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.598440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.598575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.598604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.598736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.598766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.433 qpair failed and we were unable to recover it. 00:34:36.433 [2024-07-14 19:06:24.598910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.433 [2024-07-14 19:06:24.598935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 
00:34:36.434 [2024-07-14 19:06:24.599034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.599059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.599237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.599265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.599400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.599429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.599576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.599605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.599704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.599730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 
00:34:36.434 [2024-07-14 19:06:24.599855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.599886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.600026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.600054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.600198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.600224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.600358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.600399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.600511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.600540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 
00:34:36.434 [2024-07-14 19:06:24.600674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.600702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.600849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.600880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.601002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.601047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.601219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.601282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.601409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.601437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 
00:34:36.434 [2024-07-14 19:06:24.601573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.601600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.601702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.601727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.601855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.601902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.602052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.602080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.602200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.602225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 
00:34:36.434 [2024-07-14 19:06:24.602371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.602396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.602539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.602567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.602679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.602708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.602835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.602860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.603009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.603034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 
00:34:36.434 [2024-07-14 19:06:24.603187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.603215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.603379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.603407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.603556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.603581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.603703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.603728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.603883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.603912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 
00:34:36.434 [2024-07-14 19:06:24.604049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.604078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.604221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.604247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.604350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.604376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.434 [2024-07-14 19:06:24.604478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.434 [2024-07-14 19:06:24.604503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.434 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.604669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.604698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 
00:34:36.718 [2024-07-14 19:06:24.604807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.604833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.604935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.604962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.605065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.605091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.605281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.605306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.605399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.605424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 
00:34:36.718 [2024-07-14 19:06:24.605575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.605600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.605755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.605783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.605892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.605921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.606037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.606067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.606183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.606209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 
00:34:36.718 [2024-07-14 19:06:24.606325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.606350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.606463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.606491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.606630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.606655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.606746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.606771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 00:34:36.718 [2024-07-14 19:06:24.606909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.718 [2024-07-14 19:06:24.606938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.718 qpair failed and we were unable to recover it. 
00:34:36.718 [2024-07-14 19:06:24.607050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.607078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.607247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.607272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.607393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.607435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.607576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.607602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.607726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.607751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.607881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.607908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.608009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.608036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.608192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.608220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.608366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.608392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.608517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.608544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.608674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.608716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.608818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.608847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.608960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.608989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.609108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.609133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.609260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.609285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.609415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.609440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.718 qpair failed and we were unable to recover it.
00:34:36.718 [2024-07-14 19:06:24.609546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.718 [2024-07-14 19:06:24.609574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.609701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.609726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.609854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.609899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.610029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.610057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.610171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.610200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.610321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.610347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.610468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.610493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.610620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.610648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.610790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.610818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.610967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.610994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.611122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.611148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.611250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.611275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.611419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.611447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.611595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.611622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.611775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.611818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.611995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.612021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.612177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.612220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.612368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.612396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.612486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.612512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.612627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.612655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.612810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.612835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.613004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.613030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.613171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.613199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.613305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.613333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.613433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.613461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.613606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.613631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.613755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.613780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.613906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.613935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.614071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.614099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.614218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.614243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.614391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.614416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.614583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.614609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.614731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.614757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.614850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.614896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.615002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.615028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.615126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.615151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.615286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.615314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.615487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.615513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.615658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.615687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.615788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.615818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.719 [2024-07-14 19:06:24.615955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.719 [2024-07-14 19:06:24.615984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.719 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.616129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.616154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.616272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.616297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.616415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.616444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.616588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.616616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.616763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.616788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.616967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.616995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.617105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.617134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.617246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.617275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.617449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.617475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.617569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.617594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.617717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.617745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.617886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.617916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.618039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.618064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.618156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.618181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.618276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.618302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.618462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.618491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.618608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.618638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.618739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.618764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.618871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.618907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.619048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.619076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.619210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.619236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.619327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.619353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.619488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.619517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.619614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.619642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.619785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.619811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.619922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.619948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.620097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.620126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.620258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.620286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.620410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.620437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.620557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.620583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.620711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.620739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.620843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.620872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.620997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.621023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.621117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.621144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.621271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.621297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.621423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.621448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.621574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.621600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.621691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.720 [2024-07-14 19:06:24.621718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.720 qpair failed and we were unable to recover it.
00:34:36.720 [2024-07-14 19:06:24.621832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.720 [2024-07-14 19:06:24.621860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.720 qpair failed and we were unable to recover it. 00:34:36.720 [2024-07-14 19:06:24.621978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.720 [2024-07-14 19:06:24.622006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.622116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.622142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.622235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.622261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.622390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.622415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 
00:34:36.721 [2024-07-14 19:06:24.622520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.622546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.622637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.622664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.622791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.622817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.622959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.622989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.623094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.623122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 
00:34:36.721 [2024-07-14 19:06:24.623243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.623268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.623387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.623413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.623539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.623567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.623699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.623728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.623846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.623871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 
00:34:36.721 [2024-07-14 19:06:24.623978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.624004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.624105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.624150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.624262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.624290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.624444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.624473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.624598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.624623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 
00:34:36.721 [2024-07-14 19:06:24.624740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.624768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.624870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.624905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.625030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.625055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.625145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.625171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.625300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.625329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 
00:34:36.721 [2024-07-14 19:06:24.625464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.625493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.625612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.625637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.625765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.625791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.625904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.625931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.626025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.626068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 
00:34:36.721 [2024-07-14 19:06:24.626196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.626222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.626336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.626361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.626526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.626551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.626711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.626737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.626900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.626926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 
00:34:36.721 [2024-07-14 19:06:24.627049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.627075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.627195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.627223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.627357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.627386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.627508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.627534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.627656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.627681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 
00:34:36.721 [2024-07-14 19:06:24.627793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.627821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.721 [2024-07-14 19:06:24.627949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.721 [2024-07-14 19:06:24.627978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.721 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.628101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.628127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.628227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.628252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.628365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.628393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 
00:34:36.722 [2024-07-14 19:06:24.628499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.628527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.628633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.628659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.628806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.628832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.628968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.628997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.629135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.629164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 
00:34:36.722 [2024-07-14 19:06:24.629314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.629340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.629458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.629483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.629611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.629639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.629742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.629771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.629900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.629926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 
00:34:36.722 [2024-07-14 19:06:24.630029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.630054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.630168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.630209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.630363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.630389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.630482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.630511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.630604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.630629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 
00:34:36.722 [2024-07-14 19:06:24.630751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.630779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.630915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.630943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.631059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.631084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.631181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.631207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.631351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.631379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 
00:34:36.722 [2024-07-14 19:06:24.631512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.631540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.631656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.631681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.631811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.631837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.631965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.632009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.632103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.632129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 
00:34:36.722 [2024-07-14 19:06:24.632254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.632280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.632384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.632410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.632514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.632541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.632686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.632715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 00:34:36.722 [2024-07-14 19:06:24.632837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.722 [2024-07-14 19:06:24.632863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.722 qpair failed and we were unable to recover it. 
00:34:36.722 [2024-07-14 19:06:24.632964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.632989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 00:34:36.723 [2024-07-14 19:06:24.633101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.633130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 00:34:36.723 [2024-07-14 19:06:24.633270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.633298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 00:34:36.723 [2024-07-14 19:06:24.633481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.633507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 00:34:36.723 [2024-07-14 19:06:24.633649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.633677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 
00:34:36.723 [2024-07-14 19:06:24.633780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.633809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 00:34:36.723 [2024-07-14 19:06:24.633953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.633982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 00:34:36.723 [2024-07-14 19:06:24.634103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.634129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 00:34:36.723 [2024-07-14 19:06:24.634257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.634283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 00:34:36.723 [2024-07-14 19:06:24.634434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.723 [2024-07-14 19:06:24.634463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.723 qpair failed and we were unable to recover it. 
00:34:36.723 [2024-07-14 19:06:24.634603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.723 [2024-07-14 19:06:24.634633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.723 qpair failed and we were unable to recover it.
[the three-line failure pattern above repeats continuously from 19:06:24.634603 through 19:06:24.652102, alternating between tqpair=0x7f8a60000b90 and tqpair=0x13aff20, always against addr=10.0.0.2, port=4420]
00:34:36.726 [2024-07-14 19:06:24.652224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.652252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.652357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.652385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.652505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.652531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.652635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.652661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.652783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.652812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 
00:34:36.726 [2024-07-14 19:06:24.652946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.652972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.653112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.653138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.653227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.653252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.653398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.653428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.653538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.653567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 
00:34:36.726 [2024-07-14 19:06:24.653695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.653721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.653842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.653868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.653973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.653998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.654102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.654128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.654256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.654281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 
00:34:36.726 [2024-07-14 19:06:24.654379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.654404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.654506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.654532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.654630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.654673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.654790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.654816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.654945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.654984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 
00:34:36.726 [2024-07-14 19:06:24.655126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.655152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.655284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.655309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.655437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.655463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.655558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.655583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.655692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.655720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 
00:34:36.726 [2024-07-14 19:06:24.655831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.655858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.655986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.726 [2024-07-14 19:06:24.656011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.726 qpair failed and we were unable to recover it. 00:34:36.726 [2024-07-14 19:06:24.656115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-14 19:06:24.656143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-14 19:06:24.656261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-14 19:06:24.656287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-14 19:06:24.656443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-14 19:06:24.656471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 
00:34:36.727 [2024-07-14 19:06:24.656624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-14 19:06:24.656652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-14 19:06:24.656754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-14 19:06:24.656780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-14 19:06:24.656927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-14 19:06:24.656975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-14 19:06:24.657078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-14 19:06:24.657104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 00:34:36.727 [2024-07-14 19:06:24.657248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.727 [2024-07-14 19:06:24.657274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.727 qpair failed and we were unable to recover it. 
00:34:36.727 [2024-07-14 19:06:24.657416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.657445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.657576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.657604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.657709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.657738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.657862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.657893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.657999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.658026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-14 19:06:24.658181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.658210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.658347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.658376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.658534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.658559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.658692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.658734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.658874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.658911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-14 19:06:24.659032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.659058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.659153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.659188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.659319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.659346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.659500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.659529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.659676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.659705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-14 19:06:24.659835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.659868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.659994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.660020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.660146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.660190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.660329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.660357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.660531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.660556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-14 19:06:24.660695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.660723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.660837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.660869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.660997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.661024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.661109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.661135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.661273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.661298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-14 19:06:24.661466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.661494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.661631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.661659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.661798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.661831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.661938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.661965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.662070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.662096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-14 19:06:24.662212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.662240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.662424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.662450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.662542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.662567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.662737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.662765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.662871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.662908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-14 19:06:24.663021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.663046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.663137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.663163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.663315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.663347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.663530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.663567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 00:34:36.728 [2024-07-14 19:06:24.663685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.728 [2024-07-14 19:06:24.663710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.728 qpair failed and we were unable to recover it. 
00:34:36.728 [2024-07-14 19:06:24.663838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.663863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.663999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.664025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.664126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.664178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.664330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.664355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.664455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.664481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 
00:34:36.729 [2024-07-14 19:06:24.664569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.664595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.664768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.664796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.664923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.664950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.665052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.665077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.665192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.665220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 
00:34:36.729 [2024-07-14 19:06:24.665321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.665350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.665511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.665537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.665626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.665651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.665813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.665842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.665997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.666023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 
00:34:36.729 [2024-07-14 19:06:24.666132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.666157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.666280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.666306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.666471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.666500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.666611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.666642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.666782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.666807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 
00:34:36.729 [2024-07-14 19:06:24.666920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.666945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.667045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.667071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.667170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.667195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.667300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.667326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.667458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.667505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 
00:34:36.729 [2024-07-14 19:06:24.667611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.667639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.667778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.667806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.667937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.667963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.668059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.668085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.668185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.668227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 
00:34:36.729 [2024-07-14 19:06:24.668377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.668403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.668495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.668521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.668636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.668662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.668772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.668800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.668933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.668960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 
00:34:36.729 [2024-07-14 19:06:24.669062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.669088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.669197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.669222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.669334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.669362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.669469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.669498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 00:34:36.729 [2024-07-14 19:06:24.669628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.729 [2024-07-14 19:06:24.669653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.729 qpair failed and we were unable to recover it. 
00:34:36.729 [2024-07-14 19:06:24.669779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.669805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.669948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.669973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.670066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.670092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.670187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.670212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.670313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.670339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 
00:34:36.730 [2024-07-14 19:06:24.670454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.670482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.670588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.670617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.670767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.670793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.670921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.670948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.671048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.671073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 
00:34:36.730 [2024-07-14 19:06:24.671190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.671218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.671337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.671363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.671485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.671511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.671624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.671652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.671808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.671834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 
00:34:36.730 [2024-07-14 19:06:24.671987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.672014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.672113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.672139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.672263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.672291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.672415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.672443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.672580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.672605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 
00:34:36.730 [2024-07-14 19:06:24.672728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.672753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.672863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.672901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.673020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.673046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.673140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.673165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.673257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.673287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 
00:34:36.730 [2024-07-14 19:06:24.673394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.673424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.673565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.673593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.673705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.673730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.673818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.673843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.673946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.673971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 
00:34:36.730 [2024-07-14 19:06:24.674074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.674101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.674195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.674221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.674319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.674344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.674457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.674485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.674588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.674618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 
00:34:36.730 [2024-07-14 19:06:24.674737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.674762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.674922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.674949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.675051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.675077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.675231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.675260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 00:34:36.730 [2024-07-14 19:06:24.675379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.730 [2024-07-14 19:06:24.675405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.730 qpair failed and we were unable to recover it. 
00:34:36.731 [2024-07-14 19:06:24.675559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.675584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.675734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.675762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.675861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.675895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.676017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.676043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.676146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.676171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 
00:34:36.731 [2024-07-14 19:06:24.676262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.676288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.676398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.676426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.676544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.676570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.676732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.676757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.676870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.676904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 
00:34:36.731 [2024-07-14 19:06:24.677029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.677055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.677164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.677189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.677287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.677313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.677429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.677457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.677589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.677618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 
00:34:36.731 [2024-07-14 19:06:24.677764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.677789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.677919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.677945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.678042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.678068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.678196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.678221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.678346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.678372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 
00:34:36.731 [2024-07-14 19:06:24.678493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.678534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.678675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.678704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.678839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.678867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.679012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.679039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 00:34:36.731 [2024-07-14 19:06:24.679134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.731 [2024-07-14 19:06:24.679164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.731 qpair failed and we were unable to recover it. 
00:34:36.731 [2024-07-14 19:06:24.679315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.679341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.679495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.679524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.679652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.679694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.679818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.679843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.679942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.679968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.680071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.680097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.680227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.680253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.680349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.680375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.680526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.680555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.680653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.680681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.680802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.680828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.680945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.680971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.681076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.681102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.681254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.681291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.681415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.731 [2024-07-14 19:06:24.681441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.731 qpair failed and we were unable to recover it.
00:34:36.731 [2024-07-14 19:06:24.681565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.681593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.681727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.681756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.681866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.681902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.682023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.682050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.682206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.682232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.682389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.682417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.682530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.682567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.682730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.682758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.682861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.682898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.683038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.683064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.683221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.683259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.683393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.683435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.683584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.683627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.683756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.683785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.683931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.683957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.684085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.684110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.684234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.684277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.684417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.684445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.684547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.684576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.684774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.684803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.684931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.684974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.685062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.685088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.685237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.685264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.685392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.685421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.685549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.685596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.685726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.685762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.685869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.685939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.686040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.686066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.686164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.686189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.686325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.686353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.686491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.686529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.686625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.686653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.686756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.686784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.686949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.686975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.687074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.732 [2024-07-14 19:06:24.687099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.732 qpair failed and we were unable to recover it.
00:34:36.732 [2024-07-14 19:06:24.687223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.687248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.687367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.687408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.687568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.687596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.687712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.687741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.687861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.687903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.687999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.688024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.688129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.688172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.688290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.688319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.688459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.688488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.688614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.688658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.688823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.688852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.688976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.689002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.689095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.689121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.689254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.689281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.689431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.689459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.689596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.689635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.689854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.689889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.690040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.690066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.690226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.690268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.690399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.690424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.690568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.690598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.690702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.690730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.690864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.690921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.691023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.691049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.691142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.691167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.691282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.691308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.691450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.691479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.691611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.691639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.691758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.691784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.691908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.691939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.692044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.692070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.692182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.692217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.692339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.692382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.692515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.692543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.692686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.692714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.692816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.692844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.692979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.693004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.693131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.693174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.693315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.693343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.693450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.693478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.733 qpair failed and we were unable to recover it.
00:34:36.733 [2024-07-14 19:06:24.693663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.733 [2024-07-14 19:06:24.693692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.734 qpair failed and we were unable to recover it.
00:34:36.734 [2024-07-14 19:06:24.693828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.734 [2024-07-14 19:06:24.693856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.734 qpair failed and we were unable to recover it.
00:34:36.734 [2024-07-14 19:06:24.693977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.734 [2024-07-14 19:06:24.694003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.734 qpair failed and we were unable to recover it.
00:34:36.734 [2024-07-14 19:06:24.694117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.734 [2024-07-14 19:06:24.694143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.734 qpair failed and we were unable to recover it.
00:34:36.734 [2024-07-14 19:06:24.694287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.734 [2024-07-14 19:06:24.694313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.734 qpair failed and we were unable to recover it.
00:34:36.734 [2024-07-14 19:06:24.694443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.694468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.694597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.694626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.694790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.694818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.694952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.694978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.695070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.695096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 
00:34:36.734 [2024-07-14 19:06:24.695190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.695234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.695369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.695397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.695549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.695578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.695680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.695708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.695840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.695868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 
00:34:36.734 [2024-07-14 19:06:24.696024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.696050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.696162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.696188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.696300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.696325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.696419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.696461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.696567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.696595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 
00:34:36.734 [2024-07-14 19:06:24.696730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.696759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.696890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.696933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.697036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.697062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.697174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.697202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.697344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.697370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 
00:34:36.734 [2024-07-14 19:06:24.697494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.697519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.697641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.697669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.697820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.697846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.697954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.697981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.698075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.698106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 
00:34:36.734 [2024-07-14 19:06:24.698248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.698273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.698366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.698391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.698496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.698525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.698629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.698670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.698806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.698834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 
00:34:36.734 [2024-07-14 19:06:24.698965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.698991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.699113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.699138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.699279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.699324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.699463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.699491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.734 [2024-07-14 19:06:24.699653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.699682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 
00:34:36.734 [2024-07-14 19:06:24.699796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.734 [2024-07-14 19:06:24.699821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.734 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.699956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.699982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.700073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.700098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.700252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.700280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.700446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.700474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-14 19:06:24.700618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.700646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.700782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.700811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.700939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.700966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.701067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.701094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.701205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.701230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-14 19:06:24.701374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.701402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.701540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.701569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.701700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.701730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.701841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.701873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.702024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.702050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-14 19:06:24.702221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.702249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.702372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.702398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.702514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.702543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.702641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.702669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.702810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.702838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-14 19:06:24.702978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.703004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.703135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.703160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.703268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.703294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.703456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.703484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.703637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.703666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-14 19:06:24.703839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.703867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.704012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.704038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.704132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.704158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.704292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.704318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.704484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.704517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-14 19:06:24.704665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.704694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.704794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.704823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.704961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.704987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.705139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.705165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.705324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.705381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-14 19:06:24.705527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.705556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.705706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.705734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.705918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.705946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.706051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.706078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.706175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.706201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 
00:34:36.735 [2024-07-14 19:06:24.706322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.735 [2024-07-14 19:06:24.706350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.735 qpair failed and we were unable to recover it. 00:34:36.735 [2024-07-14 19:06:24.706479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.706508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.706641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.706671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.706790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.706819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.706933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.706959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 
00:34:36.736 [2024-07-14 19:06:24.707113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.707138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.707285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.707313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.707414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.707451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.707596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.707621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.707723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.707748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3755356 Killed "${NVMF_APP[@]}" "$@" 00:34:36.736 qpair failed and we were unable to recover it. 
00:34:36.736 [2024-07-14 19:06:24.707883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.707908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.708030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.708058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:36.736 [2024-07-14 19:06:24.708227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.708253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:36.736 [2024-07-14 19:06:24.708425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:36.736 [2024-07-14 19:06:24.708454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 
00:34:36.736 [2024-07-14 19:06:24.708574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.708606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.708752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.736 [2024-07-14 19:06:24.708778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.708914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.708940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.709064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.709090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 00:34:36.736 [2024-07-14 19:06:24.709211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.736 [2024-07-14 19:06:24.709247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.736 qpair failed and we were unable to recover it. 
00:34:36.736 [2024-07-14 19:06:24.709353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.709381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.709501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.709527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.709620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.709646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.709775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.709804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.709915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.709944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.710119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.710145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.710306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.710331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.710455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.710481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.710640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.710668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.710779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.710805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.710903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.710929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.711029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.711055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.711231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.711260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.711390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.711415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.736 [2024-07-14 19:06:24.711513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.736 [2024-07-14 19:06:24.711539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.736 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.711726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.711751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.711875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.711906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.712002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.712027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.712145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.712170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.712283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.712311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3755879
00:34:36.737 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:36.737 [2024-07-14 19:06:24.712463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.712490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3755879
00:34:36.737 [2024-07-14 19:06:24.712594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.712620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3755879 ']'
00:34:36.737 [2024-07-14 19:06:24.712743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.712769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:36.737 [2024-07-14 19:06:24.712929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.712958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:36.737 [2024-07-14 19:06:24.713094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.713123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:36.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:36.737 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:36.737 [2024-07-14 19:06:24.713298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.713324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 19:06:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:36.737 [2024-07-14 19:06:24.713445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.713471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.713618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.713647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.713760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.713789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.713916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.713943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.714069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.714095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.714228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.714258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.714421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.714458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.714611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.714648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.714766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.714819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.715011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.715052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.715191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.715226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.715395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.715425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.715555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.715581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.715718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.715744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.715840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.715866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.716003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.716029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.716158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.716199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.716351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.716381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.716482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.716508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.717348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.717381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.717568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.717597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.718671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.718705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.718893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.718923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.719665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.719697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.737 qpair failed and we were unable to recover it.
00:34:36.737 [2024-07-14 19:06:24.719852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.737 [2024-07-14 19:06:24.719901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.720027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.720053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.720178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.720204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.720312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.720338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.720440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.720466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.720584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.720612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.720716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.720745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.720894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.720931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.721035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.721060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.721154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.721180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.721277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.721302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.721423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.721450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.721574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.721600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.721777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.721805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.721918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.721948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.722072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.722098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.722221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.722258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.722383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.722409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.722534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.722563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.722690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.722728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.722864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.722899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.723013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.723039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.723179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.723207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.723322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.723348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.724218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.724251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.724446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.738 [2024-07-14 19:06:24.724473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.738 qpair failed and we were unable to recover it.
00:34:36.738 [2024-07-14 19:06:24.724600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.724626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.724751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.724776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.724916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.724943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.725049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.725075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.725192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.725221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 
00:34:36.738 [2024-07-14 19:06:24.726301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.726334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.726483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.726511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.726662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.726694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.726807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.726834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.726989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.727015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 
00:34:36.738 [2024-07-14 19:06:24.727139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.727165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.727319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.727347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.727480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.727518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.727635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.727660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 00:34:36.738 [2024-07-14 19:06:24.727788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.738 [2024-07-14 19:06:24.727813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.738 qpair failed and we were unable to recover it. 
00:34:36.739 [2024-07-14 19:06:24.727909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.727935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.728056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.728081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.728183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.728208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.728320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.728347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.728496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.728524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.728634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.728663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.728802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.728828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.728977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.729003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.729131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.729171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.729315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.729343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.729499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.729525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.729659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.729701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.729857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.729903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.730055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.730082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.730196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.730221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.730324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.730350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.730476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.730504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.730606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.730632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.730744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.730770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.730898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.730925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.731026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.731052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.731150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.731176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.731333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.731358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.731458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.731502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.731626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.731653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.731792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.731818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.731928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.731954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.732047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.732073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.732164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.732206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.732331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.732358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.732473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.732499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.732631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.732657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.732769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.732799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.732955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.732981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.733071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.733097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.733221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.733247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.733375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.733400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.733519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.733545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.733714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.733739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.733865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.739 [2024-07-14 19:06:24.733906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.739 qpair failed and we were unable to recover it.
00:34:36.739 [2024-07-14 19:06:24.734059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.734085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.734191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.734216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.734318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.734344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.734457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.734482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.734611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.734637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.734756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.734781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.734924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.734951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.735695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.735725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.735893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.735921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.736026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.736052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.736155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.736180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.736310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.736336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.736445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.736471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.736571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.736596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.736964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.736992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.737148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.737174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.737279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.737305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.737435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.737462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.737591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.737617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.737742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.737771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.737910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.737937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.738085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.738112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.738214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.738240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.738341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.738367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.738469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.738494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.738621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.738647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.738746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.738771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.738873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.738904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.739029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.739055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.739150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.739176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.739312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.739337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.739430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.739456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.739552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.739578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.739668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.739695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.739793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.740 [2024-07-14 19:06:24.739819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.740 qpair failed and we were unable to recover it.
00:34:36.740 [2024-07-14 19:06:24.739926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.739953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.740057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.740083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.740189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.740214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.740309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.740335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.740464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.740490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.740612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.740637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.740773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.740798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.740922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.740948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.741087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.741113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.741239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.741265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.741389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.741416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.741570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.741596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.741699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.741725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.741848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.741905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.742015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.742041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.742165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.742195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.742304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.742330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.742460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.742485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.742613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.742639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.742794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.741 [2024-07-14 19:06:24.742819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.741 qpair failed and we were unable to recover it.
00:34:36.741 [2024-07-14 19:06:24.742918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.742945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.743082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.743107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.743240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.743265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.743369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.743394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.743518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.743548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 
00:34:36.741 [2024-07-14 19:06:24.743673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.743698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.743819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.743844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.743947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.743973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.744076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.744101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.744224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.744250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 
00:34:36.741 [2024-07-14 19:06:24.744354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.744379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.744476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.744503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.744595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.744622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.744721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.744746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.744871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.744903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 
00:34:36.741 [2024-07-14 19:06:24.745001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.745026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.745132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.745157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.745302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.745327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.745456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.745482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 00:34:36.741 [2024-07-14 19:06:24.745572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.741 [2024-07-14 19:06:24.745597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.741 qpair failed and we were unable to recover it. 
00:34:36.741 [2024-07-14 19:06:24.745704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.745729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.745856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.745898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.746030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.746057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.746213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.746239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.746337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.746364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-14 19:06:24.746468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.746495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.746590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.746615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.746769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.746794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.746950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.746976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.747103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.747129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-14 19:06:24.747236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.747261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.747401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.747426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.747552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.747577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.747698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.747723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.747852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.747888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-14 19:06:24.748015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.748041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.748138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.748163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.748261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.748288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.748418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.748443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.748550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.748575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-14 19:06:24.748695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.748722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.748822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.748848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.748982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.749008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.749103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.749129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.749234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.749263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-14 19:06:24.749419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.749444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.749547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.749573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.749663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.749689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.749793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.749819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.749951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.749978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-14 19:06:24.750125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.750150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.750254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.750279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.750414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.750439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.750527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.750552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.750659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.750684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-14 19:06:24.750824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.750850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.750978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.751004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.751130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.751155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.751290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.751315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 00:34:36.742 [2024-07-14 19:06:24.751412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.742 [2024-07-14 19:06:24.751438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.742 qpair failed and we were unable to recover it. 
00:34:36.742 [2024-07-14 19:06:24.751541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.751566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.751687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.751712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.751887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.751913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.752003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.752029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.752143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.752169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 
00:34:36.743 [2024-07-14 19:06:24.752272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.752298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.752405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.752430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.752518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.752543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.752689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.752714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.752808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.752834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 
00:34:36.743 [2024-07-14 19:06:24.752937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.752963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.753069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.753094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.753219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.753244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.753333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.753358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.753491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.753517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 
00:34:36.743 [2024-07-14 19:06:24.753617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.753642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.753744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.753769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.753885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.753912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.754041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.754066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.754152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.754177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 
00:34:36.743 [2024-07-14 19:06:24.754310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.754335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.754464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.754489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.754612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.754637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.754761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.754786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.754906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.754937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 
00:34:36.743 [2024-07-14 19:06:24.755075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.755100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.755192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.755217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.755315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.755340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.755432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.755458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.755586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.755612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 
00:34:36.743 [2024-07-14 19:06:24.755710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.755737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.755844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.755869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.756012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.756038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.756129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.756154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.756277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.756303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 
00:34:36.743 [2024-07-14 19:06:24.756435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.756461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.756588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.756614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.756747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.756773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.756909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.756935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 00:34:36.743 [2024-07-14 19:06:24.757040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.757065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.743 qpair failed and we were unable to recover it. 
00:34:36.743 [2024-07-14 19:06:24.757165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.743 [2024-07-14 19:06:24.757190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.757303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.757329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.757424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.757449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.757548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.757574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.757674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.757699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 
00:34:36.744 [2024-07-14 19:06:24.757852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.757905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.758009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.758036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.758146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.758171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.758294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.758319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.758420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.758445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 
00:34:36.744 [2024-07-14 19:06:24.758550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.758575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.758702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.758728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.758860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.758893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.759000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.759025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.759160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.759186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 
00:34:36.744 [2024-07-14 19:06:24.759311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.759337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.759484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.759510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.759609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.759635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.759743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.759768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.759870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.759902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 
00:34:36.744 [2024-07-14 19:06:24.760032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.744 [2024-07-14 19:06:24.760057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.744 qpair failed and we were unable to recover it.
00:34:36.744 [2024-07-14 19:06:24.760149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.744 [2024-07-14 19:06:24.760174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.744 qpair failed and we were unable to recover it.
00:34:36.744 [2024-07-14 19:06:24.760327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.744 [2024-07-14 19:06:24.760352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.744 qpair failed and we were unable to recover it.
00:34:36.744 [2024-07-14 19:06:24.760455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.744 [2024-07-14 19:06:24.760481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.744 qpair failed and we were unable to recover it.
00:34:36.744 [2024-07-14 19:06:24.760518] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:36.744 [2024-07-14 19:06:24.760584] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:36.744 [2024-07-14 19:06:24.760587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.744 [2024-07-14 19:06:24.760612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.744 qpair failed and we were unable to recover it.
00:34:36.744 [2024-07-14 19:06:24.760707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.744 [2024-07-14 19:06:24.760732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.744 qpair failed and we were unable to recover it.
00:34:36.744 [2024-07-14 19:06:24.760837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.744 [2024-07-14 19:06:24.760862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.744 qpair failed and we were unable to recover it.
00:34:36.744 [2024-07-14 19:06:24.760993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.744 [2024-07-14 19:06:24.761019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.744 qpair failed and we were unable to recover it.
00:34:36.744 [2024-07-14 19:06:24.761142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.761168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.761263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.761290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.761410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.761435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.761542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.761567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.761668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.761694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 
00:34:36.744 [2024-07-14 19:06:24.761823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.761848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.761980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.762007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.762135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.762161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.762291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.762321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.762446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.762472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 
00:34:36.744 [2024-07-14 19:06:24.762581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.762606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.744 qpair failed and we were unable to recover it. 00:34:36.744 [2024-07-14 19:06:24.762739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.744 [2024-07-14 19:06:24.762764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.762859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.762892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.762984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.763009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.763135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.763160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 
00:34:36.745 [2024-07-14 19:06:24.763285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.763310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.763412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.763437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.763591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.763616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.763743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.763769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.763860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.763892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 
00:34:36.745 [2024-07-14 19:06:24.764021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.764046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.764208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.764233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.764336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.764361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.764483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.764508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.764633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.764658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 
00:34:36.745 [2024-07-14 19:06:24.764782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.764807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.764917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.764943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.765035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.765060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.765151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.765176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.765297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.765323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 
00:34:36.745 [2024-07-14 19:06:24.765426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.765452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.765582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.765607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.765758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.765783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.765932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.765958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.766050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.766076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 
00:34:36.745 [2024-07-14 19:06:24.766282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.766307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.766407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.766432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.766559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.766584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.766715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.766740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.766859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.766889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 
00:34:36.745 [2024-07-14 19:06:24.767036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.767061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.767150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.767176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.767305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.767330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.767435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.767460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.767578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.767604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 
00:34:36.745 [2024-07-14 19:06:24.767727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.767751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.767880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.745 [2024-07-14 19:06:24.767906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.745 qpair failed and we were unable to recover it. 00:34:36.745 [2024-07-14 19:06:24.768024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.768048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.768142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.768177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.768377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.768402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 
00:34:36.746 [2024-07-14 19:06:24.768525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.768550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.768657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.768682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.768780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.768807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.768919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.768945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.769052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.769078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 
00:34:36.746 [2024-07-14 19:06:24.769173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.769199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.769324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.769349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.769472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.769497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.769603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.769629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 00:34:36.746 [2024-07-14 19:06:24.769735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.746 [2024-07-14 19:06:24.769761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.746 qpair failed and we were unable to recover it. 
00:34:36.746 [2024-07-14 19:06:24.769849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.769874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.769998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.770023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.770161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.770186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.770306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.770331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.770456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.770481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.770603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.770628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.770719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.770744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.770871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.770901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.771051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.771076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.771216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.771241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.771361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.771386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.771489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.771514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.771608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.771633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.771762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.771788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.771930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.771956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.772065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.772091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.772229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.772254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.772351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.772377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.772526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.772551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.772687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.772712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.772841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.772866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.772979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.773005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.773141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.773167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.773288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.773314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.773423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.773448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.773574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.773599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.746 [2024-07-14 19:06:24.773702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.746 [2024-07-14 19:06:24.773727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.746 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.773852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.773885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.774009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.774038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.774136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.774161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.774278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.774303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.774430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.774455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.774591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.774616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.774710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.774736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.774865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.774896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.774995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.775021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.775119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.775146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.775306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.775331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.775483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.775508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.775635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.775661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.775758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.775785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.775909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.775936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.776039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.776065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.776161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.776187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.776291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.776316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.776440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.776466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.776563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.776588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.776711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.776736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.776872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.776902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.777004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.777030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.777155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.777180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.777300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.777326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.777444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.777470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.777580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.777605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.777722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.777748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.777890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.777917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.778050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.778075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.778200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.778225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.778322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.778348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.778474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.778500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.778602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.778628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.778728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.778753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.778856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.778887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.778977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.779003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.779093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.779118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.779256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.779281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.779407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.747 [2024-07-14 19:06:24.779433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.747 qpair failed and we were unable to recover it.
00:34:36.747 [2024-07-14 19:06:24.779531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.779557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.779653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.779682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.779835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.779860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.779959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.779984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.780088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.780113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.780215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.780240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.780326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.780351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.780479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.780505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.780635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.780660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.780784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.780809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.780915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.780941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.781073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.781098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.781205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.781230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.781320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.781345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.781450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.781476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.781610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.781636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.781719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.781744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.781839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.781864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.781965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.781991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.782084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.782110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.782255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.782280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.782417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.782442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.782564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.782589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.782715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.782741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.782870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.782902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.783033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.783058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.783157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.783183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.783318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.748 [2024-07-14 19:06:24.783343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.748 qpair failed and we were unable to recover it.
00:34:36.748 [2024-07-14 19:06:24.783435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.783464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.783618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.783643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.783739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.783764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.783868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.783908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.784039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.784065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 
00:34:36.748 [2024-07-14 19:06:24.784165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.784190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.784327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.784352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.784461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.784486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.784613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.784638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.784744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.784769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 
00:34:36.748 [2024-07-14 19:06:24.784868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.784900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.785029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.785055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.748 qpair failed and we were unable to recover it. 00:34:36.748 [2024-07-14 19:06:24.785205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.748 [2024-07-14 19:06:24.785230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.785331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.785357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.785453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.785479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-14 19:06:24.785612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.785637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.785764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.785790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.785908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.785935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.786087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.786113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.786217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.786242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-14 19:06:24.786360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.786386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.786523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.786548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.786648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.786673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.786782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.786808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.786942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.786968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-14 19:06:24.787065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.787091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.787217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.787242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.787362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.787387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.787509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.787534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.787654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.787679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-14 19:06:24.787803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.787828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.787921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.787947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.788051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.788076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.788209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.788234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.788330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.788355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-14 19:06:24.788492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.788517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.788621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.788646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.788797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.788823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.788983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.789009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.789113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.789138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-14 19:06:24.789239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.789268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.789417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.789442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.789560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.789585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.789685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.789710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.789839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.789864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 
00:34:36.749 [2024-07-14 19:06:24.789993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.790019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.790117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.790143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.749 [2024-07-14 19:06:24.790275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.749 [2024-07-14 19:06:24.790301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.749 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.790431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.790457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.790555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.790580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-14 19:06:24.790700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.790725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.790830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.790855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.790992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.791017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.791114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.791139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.791268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.791293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-14 19:06:24.791392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.791419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.791544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.791569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.791691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.791716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.791812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.791837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.791940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.791966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-14 19:06:24.792064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.792090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.792192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.792218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.792356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.792382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.792480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.792505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.792599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.792625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-14 19:06:24.792718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.792743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.792837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.792861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.792978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.793004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.793100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.793126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.793225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.793251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-14 19:06:24.793351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.793377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.793477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.793502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.793626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.793651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.793748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.793773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.793911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.793937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-14 19:06:24.794036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.794062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.794162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.794191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.794323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.794348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.794448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.794473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.794597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.794622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-14 19:06:24.794741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.794770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.794898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.794924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.795047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.795073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.795208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.795238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 00:34:36.750 [2024-07-14 19:06:24.795335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.750 [2024-07-14 19:06:24.795361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.750 qpair failed and we were unable to recover it. 
00:34:36.750 [2024-07-14 19:06:24.795461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.750 [2024-07-14 19:06:24.795487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.750 qpair failed and we were unable to recover it.
00:34:36.751 EAL: No free 2048 kB hugepages reported on node 1
[The posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats continuously from 19:06:24.795 through 19:06:24.811, with only the timestamps changing (same tqpair=0x7f8a60000b90, same target 10.0.0.2:4420, always errno = 111, i.e. ECONNREFUSED); each repetition ends with "qpair failed and we were unable to recover it." The duplicate repetitions are omitted here.]
00:34:36.753 [2024-07-14 19:06:24.811922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.753 [2024-07-14 19:06:24.811948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.753 qpair failed and we were unable to recover it. 00:34:36.753 [2024-07-14 19:06:24.812146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.753 [2024-07-14 19:06:24.812172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.753 qpair failed and we were unable to recover it. 00:34:36.753 [2024-07-14 19:06:24.812320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.753 [2024-07-14 19:06:24.812345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.753 qpair failed and we were unable to recover it. 00:34:36.753 [2024-07-14 19:06:24.812436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.753 [2024-07-14 19:06:24.812461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.753 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.812559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.812584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 
00:34:36.754 [2024-07-14 19:06:24.812705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.812731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.812933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.812959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.813054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.813081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.813181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.813208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.813327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.813352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 
00:34:36.754 [2024-07-14 19:06:24.813457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.813482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.813613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.813638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.813760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.813785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.813910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.813940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.814046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.814072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 
00:34:36.754 [2024-07-14 19:06:24.814174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.814199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.814302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.814328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.814448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.814473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.814624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.814648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.814742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.814767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 
00:34:36.754 [2024-07-14 19:06:24.814899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.814925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.815022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.815047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.815139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.815165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.815301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.815327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.815423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.815449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 
00:34:36.754 [2024-07-14 19:06:24.815556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.815582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.815712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.815737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.815841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.815866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.815993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.816018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.816138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.816163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 
00:34:36.754 [2024-07-14 19:06:24.816260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.816287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.816416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.816441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.816533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.816559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.816682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.816707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.816803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.816829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 
00:34:36.754 [2024-07-14 19:06:24.816930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.816957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.817051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.817076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.817210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.817235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.817334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.817360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.817506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.817531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 
00:34:36.754 [2024-07-14 19:06:24.817632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.817657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.817786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.817812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.817934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.817960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.754 [2024-07-14 19:06:24.818091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.754 [2024-07-14 19:06:24.818116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.754 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.818245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.818271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 
00:34:36.755 [2024-07-14 19:06:24.818394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.818420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.818547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.818574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.818670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.818696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.818790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.818815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.818939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.818965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 
00:34:36.755 [2024-07-14 19:06:24.819094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.819119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.819214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.819239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.819335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.819361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.819460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.819486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.819610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.819635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 
00:34:36.755 [2024-07-14 19:06:24.819763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.819789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.819909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.819935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.820062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.820088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.820245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.820271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.820408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.820433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 
00:34:36.755 [2024-07-14 19:06:24.820554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.820580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.820689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.820715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.820816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.820842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.820973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.821000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.821127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.821154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 
00:34:36.755 [2024-07-14 19:06:24.821252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.821278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.821375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.821401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.821544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.821570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.821699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.821725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.821854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.821902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 
00:34:36.755 [2024-07-14 19:06:24.822004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.822030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.822128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.822154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.822282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.822307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.822407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.822433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.822563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.822589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 
00:34:36.755 [2024-07-14 19:06:24.822726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.822752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.822853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.822885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.823017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.823043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.823193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.823219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.823355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.823380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 
00:34:36.755 [2024-07-14 19:06:24.823471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.823501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.823631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.823657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.823781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.823806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.755 [2024-07-14 19:06:24.823902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.755 [2024-07-14 19:06:24.823929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.755 qpair failed and we were unable to recover it. 00:34:36.756 [2024-07-14 19:06:24.824063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.756 [2024-07-14 19:06:24.824088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.756 qpair failed and we were unable to recover it. 
00:34:36.756 [2024-07-14 19:06:24.824214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.824239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.824334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.824359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.824455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.824480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.824595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.824620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.824717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.824742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.824845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.824870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.825014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.825040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.825158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.825184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.825320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.825345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.825479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.825504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.825651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.825676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.825799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.825825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.825934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.825960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.826061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.826087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.826242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.826267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.826388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.826414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.826532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.826558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.826660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.826686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.826834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.826859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.826990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.827016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.827142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.827167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.827296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.827322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.827445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.827470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.827563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.827588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.827687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.827712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.827839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.827864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.827968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.827994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.828087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.828112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.828267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.828292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.828441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.828467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.828602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.828627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.828753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.828779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.828927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.828953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.829047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.829073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.829169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.829195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.829312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.829342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.829435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.829461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.829557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.829583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.829733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.756 [2024-07-14 19:06:24.829758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.756 qpair failed and we were unable to recover it.
00:34:36.756 [2024-07-14 19:06:24.829885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.829911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.830041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.830067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.830183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.830208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.830301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.830327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.830449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.830474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.830596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.830620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.830713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.830738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.830866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.830896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.831025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.831054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.831178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.831204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.831300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.831326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.831441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.831466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.831618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.831643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.831772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.831798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.831837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bdf20 (9): Bad file descriptor
00:34:36.757 [2024-07-14 19:06:24.832076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.832117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.832225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.832253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.832353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.832379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.832477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.832503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.832625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.832651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.832775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.832801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.832937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.832965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.833124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.833149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.833248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.833277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.833372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.833398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 [2024-07-14 19:06:24.833397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.833556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.833581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.833677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.833702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.833829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.833855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.833987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.834012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.834167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.834192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.834314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.834339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.834440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.834466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.834589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.834615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.834733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.834758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.834858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.834889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.834985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.835010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.835101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.835131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.835267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.835292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.757 qpair failed and we were unable to recover it.
00:34:36.757 [2024-07-14 19:06:24.835384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.757 [2024-07-14 19:06:24.835410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.835529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.835554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.835682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.835708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.835828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.835853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.835968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.836008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.836111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.836139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.836295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.836321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.836451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.836478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.836608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.836636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.836742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.836770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.836906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.836933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.837029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.837055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.837188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.837214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.837341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.837367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.837458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.837484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.837606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.837632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.837742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.837770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.837864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.837895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.838045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.758 [2024-07-14 19:06:24.838071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.758 qpair failed and we were unable to recover it.
00:34:36.758 [2024-07-14 19:06:24.838197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.838223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.838322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.838348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.838475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.838500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.838663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.838690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.838791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.838818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 
00:34:36.758 [2024-07-14 19:06:24.838951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.838978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.839104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.839130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.839259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.839285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.839383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.839409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.839528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.839553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 
00:34:36.758 [2024-07-14 19:06:24.839677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.839704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.839830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.839856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.839970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.839997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.840088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.840114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.840214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.840241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 
00:34:36.758 [2024-07-14 19:06:24.840361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.840387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.840479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.840505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.840608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.758 [2024-07-14 19:06:24.840634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.758 qpair failed and we were unable to recover it. 00:34:36.758 [2024-07-14 19:06:24.840741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.840769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.840898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.840930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-14 19:06:24.841054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.841080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.841205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.841232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.841385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.841411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.841533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.841559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.841712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.841738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-14 19:06:24.841857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.841889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.842018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.842044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.842148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.842176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.842303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.842330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.842481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.842507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-14 19:06:24.842604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.842630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.842759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.842784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.842913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.842939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.843071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.843096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.843190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.843215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-14 19:06:24.843318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.843344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.843476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.843504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.843605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.843631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.843752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.843778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.843909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.843936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-14 19:06:24.844034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.844061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.844189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.844215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.844314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.844342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.844444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.844472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.844567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.844593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-14 19:06:24.844716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.844742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.844862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.844899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.844993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.845019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.845178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.845206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.845303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.845331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-14 19:06:24.845457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.845484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.845606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.845632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.845755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.845781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.845880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.845907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.846065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.846091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 
00:34:36.759 [2024-07-14 19:06:24.846191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.846219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.846343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.846368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.846497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.846523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.759 [2024-07-14 19:06:24.846610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.759 [2024-07-14 19:06:24.846635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.759 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.846752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.846778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-14 19:06:24.846874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.846905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.847023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.847048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.847177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.847202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.847326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.847352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.847482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.847508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-14 19:06:24.847635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.847661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.847788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.847814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.847941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.847968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.848071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.848097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.848225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.848251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-14 19:06:24.848378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.848404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.848528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.848553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.848650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.848675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.848789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.848814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.848982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.849023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-14 19:06:24.849158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.849185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.849293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.849320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.849415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.849440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.849563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.849592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.849696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.849723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-14 19:06:24.849833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.849862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.850031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.850057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.850180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.850206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.850305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.850330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.850433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.850459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-14 19:06:24.850599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.850625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.850751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.850781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.850897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.850936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.851044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.851069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.851203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.851228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-14 19:06:24.851331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.851356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.851455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.851480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.851599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.851624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.851718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.851743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.851864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.851898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 
00:34:36.760 [2024-07-14 19:06:24.851994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.852019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.852146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.852172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.852292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.852317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.852446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.760 [2024-07-14 19:06:24.852471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.760 qpair failed and we were unable to recover it. 00:34:36.760 [2024-07-14 19:06:24.852621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.852646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-14 19:06:24.852742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.852767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.852863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.852894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.853000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.853028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.853203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.853245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.853353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.853381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-14 19:06:24.853512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.853539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.853643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.853669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.853812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.853853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.853964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.853991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.854093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.854118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-14 19:06:24.854239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.854264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.854357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.854382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.854511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.854537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.854635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.854665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.854774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.854813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-14 19:06:24.854968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.855007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.855116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.855145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.855273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.855299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.855429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.855456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.855582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.855609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-14 19:06:24.855739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.855766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.855869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.855899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.855999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.856024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.856148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.856174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.856301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.856326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-14 19:06:24.856457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.856483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.856606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.856635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.856779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.856805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.856942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.856982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.857085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.857112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-14 19:06:24.857250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.857276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.857406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.857433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.857565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.857592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.857729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.857769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.857903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.857932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-14 19:06:24.858035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.858063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.858167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.858194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.858297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.858323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.858487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.858515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.858615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.858642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 
00:34:36.761 [2024-07-14 19:06:24.858786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.858826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.761 qpair failed and we were unable to recover it. 00:34:36.761 [2024-07-14 19:06:24.858964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.761 [2024-07-14 19:06:24.858992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.859090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.859117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.859252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.859279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.859401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.859427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 
00:34:36.762 [2024-07-14 19:06:24.859526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.859555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.859681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.859707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.859841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.859888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.860020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.860047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.860150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.860177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 
00:34:36.762 [2024-07-14 19:06:24.860305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.860332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.860460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.860487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.860587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.860614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.860719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.860752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.860920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.860960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 
00:34:36.762 [2024-07-14 19:06:24.861093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.861123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.861252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.861280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.861408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.861434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.861601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.861640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.861750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.861778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 
00:34:36.762 [2024-07-14 19:06:24.861931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.861958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.862080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.862106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.862209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.862237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.862335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.862363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.862522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.862550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 
00:34:36.762 [2024-07-14 19:06:24.862651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.862678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.862772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.862798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.862907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.862935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.863060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.863086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 00:34:36.762 [2024-07-14 19:06:24.863188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.762 [2024-07-14 19:06:24.863215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.762 qpair failed and we were unable to recover it. 
00:34:36.762 [2024-07-14 19:06:24.863347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.863376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.863502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.863528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.863661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.863689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.863794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.863820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.863974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.864001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 
00:34:36.763 [2024-07-14 19:06:24.864103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.864129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.864251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.864277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.864394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.864420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.864527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.864555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.864686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.864712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 
00:34:36.763 [2024-07-14 19:06:24.864814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.864853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.864994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.865022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.865159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.865188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.865316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.865343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.865474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.865501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 
00:34:36.763 [2024-07-14 19:06:24.865652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.865680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.865807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.865834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.865957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.865986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.866113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.866139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 00:34:36.763 [2024-07-14 19:06:24.866261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.763 [2024-07-14 19:06:24.866286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.763 qpair failed and we were unable to recover it. 
00:34:36.763 [2024-07-14 19:06:24.866415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.866441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.866542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.866570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.866675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.866702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.866863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.866895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.867004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.867031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.867135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.867161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.867255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.867280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.867380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.867405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.867531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.867560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.867764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.867791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.867950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.867977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.868102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.868128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.868256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.868283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.868413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.868439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.868542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.868570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.868658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.868684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.868826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.868865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.869013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.869041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.869140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.869165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.869267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.869292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.869396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.869422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.869514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.869541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.763 [2024-07-14 19:06:24.869659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.763 [2024-07-14 19:06:24.869685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.763 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.869813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.869842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.870010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.870050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.870165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.870193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.870319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.870345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.870441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.870467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.870619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.870646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.870743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.870771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.870868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.870903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.871033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.871058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.871188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.871212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.871305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.871331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.871451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.871476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.871570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.871595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.871709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.871749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.871895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.871934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.872045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.872074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.872200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.872227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.872365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.872391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.872485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.872511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.872661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.872687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.872809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.872835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.872955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.872982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.873077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.873103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.873230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.873257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.873355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.873382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.873535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.873561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.873678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.873705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.873831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.873858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.873966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.874005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.874136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.874166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.874296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.874324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.874453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.874480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.874610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.874636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.874735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.874761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.874892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.874925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.875055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.875082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.875204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.875230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.875362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.875388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.875514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.875540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.875636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.875662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.875786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.875813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.875925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.875954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.876078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.764 [2024-07-14 19:06:24.876104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.764 qpair failed and we were unable to recover it.
00:34:36.764 [2024-07-14 19:06:24.876198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.876224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.876351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.876376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.876504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.876529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.876619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.876644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.876770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.876795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.876903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.876929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.877029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.877054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.877152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.877178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.877275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.877300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.877398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.877424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.877549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.877577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.877705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.877731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.877828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.877855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.877971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.877997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.878091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.878118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.878248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.878275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.878370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.878396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.878524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.878549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.878715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.878755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.878855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.878889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.879003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.879042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.879175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.879203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.879316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.879344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.879448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.879474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.879584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.879611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.879708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.879733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.879854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.879885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.879981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.880006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.880132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.765 [2024-07-14 19:06:24.880158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.765 qpair failed and we were unable to recover it.
00:34:36.765 [2024-07-14 19:06:24.880281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.880306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.880426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.880452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.880555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.880581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.880753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.880792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.880920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.880949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 
00:34:36.765 [2024-07-14 19:06:24.881080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.881107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.881204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.881230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.881326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.881352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.881446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.881472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.881627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.881654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 
00:34:36.765 [2024-07-14 19:06:24.881775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.881801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.882018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.882058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.882165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.882194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.882318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.882345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 00:34:36.765 [2024-07-14 19:06:24.882442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.765 [2024-07-14 19:06:24.882469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.765 qpair failed and we were unable to recover it. 
00:34:36.765 [2024-07-14 19:06:24.882567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.882594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.882725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.882751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.882901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.882927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.883031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.883057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.883147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.883173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 
00:34:36.766 [2024-07-14 19:06:24.883267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.883293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.883379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.883405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.883533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.883559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.883677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.883716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.883887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.883927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 
00:34:36.766 [2024-07-14 19:06:24.884047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.884077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.884209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.884237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.884383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.884409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.884556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.884582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.884712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.884745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 
00:34:36.766 [2024-07-14 19:06:24.884867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.884911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.885135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.885174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.885315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.885343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.885497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.885524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.885617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.885644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 
00:34:36.766 [2024-07-14 19:06:24.885742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.885770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.885898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.885926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.886050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.886076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.886236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.886262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.886390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.886416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 
00:34:36.766 [2024-07-14 19:06:24.886528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.886554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.886705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.886731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.886820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.886847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.886969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.887016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.887122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.887150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 
00:34:36.766 [2024-07-14 19:06:24.887283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.887310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.887403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.887429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.887560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.887585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.887707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.887733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.887835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.887862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 
00:34:36.766 [2024-07-14 19:06:24.887989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.888015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.888236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.766 [2024-07-14 19:06:24.888275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.766 qpair failed and we were unable to recover it. 00:34:36.766 [2024-07-14 19:06:24.888434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.888462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.888565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.888592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.888686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.888712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 
00:34:36.767 [2024-07-14 19:06:24.888804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.888832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.888948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.888987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.889125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.889152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.889277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.889304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.889408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.889434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 
00:34:36.767 [2024-07-14 19:06:24.889530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.889558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.889681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.889707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.889848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.889894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.890028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.890055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.890160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.890190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 
00:34:36.767 [2024-07-14 19:06:24.890283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.890310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.890438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.890465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.890603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.890630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.890766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.890792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.890914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.890950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 
00:34:36.767 [2024-07-14 19:06:24.891056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.891084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.891188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.891217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.891319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.891346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.891476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.891504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.891626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.891652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 
00:34:36.767 [2024-07-14 19:06:24.891860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.891894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.892032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.892060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.892164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.892189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.892316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.892342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.892439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.892464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 
00:34:36.767 [2024-07-14 19:06:24.892587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.892612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.892711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.892736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.892826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.892852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.893008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.893047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.893178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.893205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 
00:34:36.767 [2024-07-14 19:06:24.893343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.893369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.893466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.893491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.893594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.893622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.893749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.893776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.893928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.893954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 
00:34:36.767 [2024-07-14 19:06:24.894044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.894071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.894204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.894231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.894349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.894376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.894508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.894535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 00:34:36.767 [2024-07-14 19:06:24.894643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.767 [2024-07-14 19:06:24.894669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.767 qpair failed and we were unable to recover it. 
00:34:36.767 [2024-07-14 19:06:24.894796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.767 [2024-07-14 19:06:24.894824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.767 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.894932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.894963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.895063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.895088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.895186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.895212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.895414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.895441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.895567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.895593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.895688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.895715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.895805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.895831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.895977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.896017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.896152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.896180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.896282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.896308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.896406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.896433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.896567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.896593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.896690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.896718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.896838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.896865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.896982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.897008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.897136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.897161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.897284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.897310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.897403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.897429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.897520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.897545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.897704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.897744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.897893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.897921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.898024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.898051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.898152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.898178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.898307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.898334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.898486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.898514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.898622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.898648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.898749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.898775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.898887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.898915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.899016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.899043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.899168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.899195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.899323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.899350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.899440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.899466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.899594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.899620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.899713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.899739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.899872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.899906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.900019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.900059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.900156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.900183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.900285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.900312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.900442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.900469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.900589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.900627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.900729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.900756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.900868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.900903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.901000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.768 [2024-07-14 19:06:24.901026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.768 qpair failed and we were unable to recover it.
00:34:36.768 [2024-07-14 19:06:24.901157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.901184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.901290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.901317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.901444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.901470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.901596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.901622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.901724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.901763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.901891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.901919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.902123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.902149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.902273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.902299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.902404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.902430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.902528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.902553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.902654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.902682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.902796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.902836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.902954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.902982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.903113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.903141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.903264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.903290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.903385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.903411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.903505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.903532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.903658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.903684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.903819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.903858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.903970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.903998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.904099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.904128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.904257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.904284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.904378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.904405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.904497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.904523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.904616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.904646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.904819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.904859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.904976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.905003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.905134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.905160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.905250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.905276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.905367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.905393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.905498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.905526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.905652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.905680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.905784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.905824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.905959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.905986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.906111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.906137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.906286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.906312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.906437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.906463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.906562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.906590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.906699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.906726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.906818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.906844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.906970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.906997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.907095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.907122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.907220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.907247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.907344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.769 [2024-07-14 19:06:24.907372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:36.769 qpair failed and we were unable to recover it.
00:34:36.769 [2024-07-14 19:06:24.907479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.770 [2024-07-14 19:06:24.907508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:36.770 qpair failed and we were unable to recover it.
00:34:36.770 [2024-07-14 19:06:24.907657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.770 [2024-07-14 19:06:24.907696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.770 qpair failed and we were unable to recover it.
00:34:36.770 [2024-07-14 19:06:24.907832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.770 [2024-07-14 19:06:24.907859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.770 qpair failed and we were unable to recover it.
00:34:36.770 [2024-07-14 19:06:24.907973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.770 [2024-07-14 19:06:24.907999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.770 qpair failed and we were unable to recover it.
00:34:36.770 [2024-07-14 19:06:24.908095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.770 [2024-07-14 19:06:24.908121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.770 qpair failed and we were unable to recover it.
00:34:36.770 [2024-07-14 19:06:24.908223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.770 [2024-07-14 19:06:24.908248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.770 qpair failed and we were unable to recover it.
00:34:36.770 [2024-07-14 19:06:24.908399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:36.770 [2024-07-14 19:06:24.908425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:36.770 qpair failed and we were unable to recover it.
00:34:36.770 [2024-07-14 19:06:24.908530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.908557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.908649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.908675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.908797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.908826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.908923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.908950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.909075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.909101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 
00:34:36.770 [2024-07-14 19:06:24.909227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.909254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.909386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.909413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.909543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.909570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.909665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.909692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.909823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.909852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 
00:34:36.770 [2024-07-14 19:06:24.909959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.909986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.910083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.910109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.910231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.910257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.910406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.910437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.910566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.910594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 
00:34:36.770 [2024-07-14 19:06:24.910724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.910752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.910890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.910930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.911043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.911071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.911175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.911201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.911404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.911430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 
00:34:36.770 [2024-07-14 19:06:24.911524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.911550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.911648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.911674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.911818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.911857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.911974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.912003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.912123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.912150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 
00:34:36.770 [2024-07-14 19:06:24.912277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.912303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.912457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.912483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.912583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.912609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.912714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.912742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 00:34:36.770 [2024-07-14 19:06:24.912888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.770 [2024-07-14 19:06:24.912928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.770 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-14 19:06:24.913095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.913135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.913267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.913295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.913432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.913459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.913556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.913583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.913707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.913733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-14 19:06:24.913838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.913864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.913980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.914007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.914102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.914128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.914289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.914315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.914452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.914478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-14 19:06:24.914605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.914638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.914742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.914769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.914882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.914911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.915045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.915072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.915167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.915194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-14 19:06:24.915320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.915346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.915467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.915495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.915594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.915620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.915721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.915747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.915840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.915865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-14 19:06:24.915983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.916022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.916130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.916158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.916280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.916306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.916455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.916481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.916592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.916618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-14 19:06:24.916738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.916765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.916865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.916900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.917003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.917029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.917161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.917187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.917349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.917376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-14 19:06:24.917472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.917500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.917631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.917658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.917751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.917776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.917888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.917918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.918032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.918072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:36.771 [2024-07-14 19:06:24.918187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.918225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.918363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.918390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.918524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.918549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.918648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.918674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 00:34:36.771 [2024-07-14 19:06:24.918796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.771 [2024-07-14 19:06:24.918821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:36.771 qpair failed and we were unable to recover it. 
00:34:37.065 [2024-07-14 19:06:24.918930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.918959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.919082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.919122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.919228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.919257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.919362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.919390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.919512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.919538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 
00:34:37.065 [2024-07-14 19:06:24.919662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.919688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.919812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.919838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.919941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.919968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.920100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.920126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.920248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.920273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 
00:34:37.065 [2024-07-14 19:06:24.920399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.920431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.920526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.920552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.920659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.920698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.920794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.920820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 00:34:37.065 [2024-07-14 19:06:24.920939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.065 [2024-07-14 19:06:24.920968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.065 qpair failed and we were unable to recover it. 
00:34:37.065 [2024-07-14 19:06:24.921098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.921125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.921256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.921282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.921410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.921436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.921560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.921586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.921711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.921737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.921845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.921871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.921973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.921999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.922099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.922127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.922247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.922285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.922400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.922427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.922532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.922558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.922656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.922681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.922795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.922820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.922914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.922943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.923038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.923066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.923169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.923197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.923288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.923315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.923407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.923433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.923531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.065 [2024-07-14 19:06:24.923558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.065 qpair failed and we were unable to recover it.
00:34:37.065 [2024-07-14 19:06:24.923659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.923686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.923807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.923833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.923940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.923968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.924058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:37.066 [2024-07-14 19:06:24.924067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.924095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:37.066 [2024-07-14 19:06:24.924100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.066 [2024-07-14 19:06:24.924110] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.924123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:37.066 [2024-07-14 19:06:24.924134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:37.066 [2024-07-14 19:06:24.924192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.924218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.924194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:34:37.066 [2024-07-14 19:06:24.924245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:34:37.066 [2024-07-14 19:06:24.924311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.924336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.924445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.924426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:34:37.066 [2024-07-14 19:06:24.924475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 [2024-07-14 19:06:24.924433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.924580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.924607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.924702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.924728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.924854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.924890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.924999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.925025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.925146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.925172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.925298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.925325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.925424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.925454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.925563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.925591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.925721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.925749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.925866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.925912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.926014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.926040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.926142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.926168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.926274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.926299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.926418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.926442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.926538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.926563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.926659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.926684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.926815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.926843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.926946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.926973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.927080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.927105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.927251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.927276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.927435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.927462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.927557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.927583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.927701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.927728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.927823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.927848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.927972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.928012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.928226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.928255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.928381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.928408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.928532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.928558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.928660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.928688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.928850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.066 [2024-07-14 19:06:24.928881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.066 qpair failed and we were unable to recover it.
00:34:37.066 [2024-07-14 19:06:24.928997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.929025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.929129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.929155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.929303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.929329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.929427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.929457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.929547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.929572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.929684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.929723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.929847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.929895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.930006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.930034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.930161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.930187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.930307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.930333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.930467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.930493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.930592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.930619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.930737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.930776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.930910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.930949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.931059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.931087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.931245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.931272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.931393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.931419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.931530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.931557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.931656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.931683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.931785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.931809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.931912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.931938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.932028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.932052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.932148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.932173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.932275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.932300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.932421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.932446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.932544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.932569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.932673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.932702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.932801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.932828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.932928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.932955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.933059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.933085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.933190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.933218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.933311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.933338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.933450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.933477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.933569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.933594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.933689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.933715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.933817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.933842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.933959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.933984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.934111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.934135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.934230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.934255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.934356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.934380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.067 qpair failed and we were unable to recover it.
00:34:37.067 [2024-07-14 19:06:24.934483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.067 [2024-07-14 19:06:24.934508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.934608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.934633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.934757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.934783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.934905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.934944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.935088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.935127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.935231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.935258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.935380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.935406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.935532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.935558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.935686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.935712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.935805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.935830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.935936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.935962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.936068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.936107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.936234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.936262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.936414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.936440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.936529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.936555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.936665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.068 [2024-07-14 19:06:24.936691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.068 qpair failed and we were unable to recover it.
00:34:37.068 [2024-07-14 19:06:24.936800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.936839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.936991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.937020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.937126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.937153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.937281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.937308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.937409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.937436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 
00:34:37.068 [2024-07-14 19:06:24.937531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.937558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.937687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.937713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.937865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.937897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.938032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.938058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.938185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.938213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 
00:34:37.068 [2024-07-14 19:06:24.938366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.938392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.938497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.938523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.938646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.938673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.938779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.938819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.938969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.939014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 
00:34:37.068 [2024-07-14 19:06:24.939149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.939177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.939302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.939328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.939453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.939479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.939602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.939628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.939750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.939776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 
00:34:37.068 [2024-07-14 19:06:24.939906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.939933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.940028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.940055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.940196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.940235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.940406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.940445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 00:34:37.068 [2024-07-14 19:06:24.940556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.068 [2024-07-14 19:06:24.940583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.068 qpair failed and we were unable to recover it. 
00:34:37.069 [2024-07-14 19:06:24.940682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.940708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.940833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.940858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.940961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.940986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.941096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.941122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.941230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.941254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 
00:34:37.069 [2024-07-14 19:06:24.941352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.941378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.941480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.941508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.941604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.941630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.941728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.941754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.941887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.941914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 
00:34:37.069 [2024-07-14 19:06:24.942018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.942044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.942137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.942163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.942265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.942292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.942414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.942439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.942566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.942590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 
00:34:37.069 [2024-07-14 19:06:24.942687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.942713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.942849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.942902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.943045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.943083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.943222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.943249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.943346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.943372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 
00:34:37.069 [2024-07-14 19:06:24.943509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.943535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.943658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.943685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.943794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.943832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.943975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.944002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.944095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.944120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 
00:34:37.069 [2024-07-14 19:06:24.944218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.944243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.944343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.944368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.944484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.944511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.944649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.944689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.944802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.944841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 
00:34:37.069 [2024-07-14 19:06:24.944966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.944996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.945128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.945155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.945360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.945386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.069 qpair failed and we were unable to recover it. 00:34:37.069 [2024-07-14 19:06:24.945514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.069 [2024-07-14 19:06:24.945540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.945637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.945664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 
00:34:37.070 [2024-07-14 19:06:24.945774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.945813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.945922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.945951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.946049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.946073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.946176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.946201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.946301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.946325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 
00:34:37.070 [2024-07-14 19:06:24.946448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.946473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.946571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.946598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.946704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.946731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.946866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.946914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.947021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.947049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 
00:34:37.070 [2024-07-14 19:06:24.947142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.947168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.947292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.947318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.947464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.947490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.947588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.947613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.947739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.947765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 
00:34:37.070 [2024-07-14 19:06:24.947868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.947900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.948018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.948058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.948172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.948211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.948347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.948375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.948505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.948532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 
00:34:37.070 [2024-07-14 19:06:24.948653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.948680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.948778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.948809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.948921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.948948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.949073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.949102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 00:34:37.070 [2024-07-14 19:06:24.949195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.070 [2024-07-14 19:06:24.949222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.070 qpair failed and we were unable to recover it. 
00:34:37.070 [2024-07-14 19:06:24.949351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.949378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.949504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.949531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.949654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.949682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.949779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.949805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.949925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.949964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.950058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.950085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.950217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.950244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.950365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.950391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.950520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.950546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.950645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.950671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.950776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.950802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.950929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.950957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.951061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.070 [2024-07-14 19:06:24.951085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.070 qpair failed and we were unable to recover it.
00:34:37.070 [2024-07-14 19:06:24.951181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.951206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.951305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.951330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.951425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.951451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.951545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.951569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.951663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.951689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.951783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.951811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.951920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.951949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.952054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.952081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.952185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.952212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.952310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.952337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.952435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.952466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.952573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.952600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.952702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.952729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.952833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.952872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.953002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.953029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.953125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.953149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.953272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.953297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.953401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.953425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.953517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.953543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.953663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.953688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.953809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.953834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.953942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.953968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.954067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.954092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.954208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.954233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.954359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.954384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.954485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.954509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.954640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.954665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.954757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.954783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.954874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.954905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.955013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.955038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.955127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.955152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.955278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.955304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.955419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.955443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.955575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.955614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.955721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.955748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.955839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.955866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.955978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.956005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.956117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.956161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.956269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.956297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.956420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.956447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.071 [2024-07-14 19:06:24.956599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.071 [2024-07-14 19:06:24.956624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.071 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.956719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.956745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.956873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.956907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.956993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.957017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.957122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.957147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.957254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.957279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.957380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.957409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.957508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.957535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.957659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.957685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.957807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.957833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.957939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.957966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.958098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.958124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.958227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.958253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.958361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.958386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.958504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.958529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.958648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.958674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.958795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.958821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.958953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.958980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.959078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.959103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.959195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.959221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.959317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.959343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.959441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.959467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.959594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.959620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.959709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.959735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.959874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.959926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.960079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.960118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.960231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.960259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.960355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.960381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.960485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.960511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.960645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.960685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.960782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.960808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.960910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.960938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.961034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.961060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.961182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.961208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.961341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.961367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.961496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.961522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.961641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.961667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.961774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.961818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.961965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.961993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.962099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.962125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.962254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.072 [2024-07-14 19:06:24.962279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.072 qpair failed and we were unable to recover it.
00:34:37.072 [2024-07-14 19:06:24.962367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.073 [2024-07-14 19:06:24.962392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.073 qpair failed and we were unable to recover it.
00:34:37.073 [2024-07-14 19:06:24.962492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.962519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.962644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.962671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.962806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.962845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.962965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.963004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.963146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.963173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 
00:34:37.073 [2024-07-14 19:06:24.963267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.963293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.963396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.963423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.963555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.963582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.963704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.963730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.963833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.963861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 
00:34:37.073 [2024-07-14 19:06:24.963975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.964000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.964095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.964121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.964249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.964274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.964371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.964396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.964520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.964545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 
00:34:37.073 [2024-07-14 19:06:24.964661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.964700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.964808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.964848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.964999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.965039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.965146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.965173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.965267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.965293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 
00:34:37.073 [2024-07-14 19:06:24.965422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.965450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.965555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.965582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.965688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.965722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.965863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.965896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.966025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.966051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 
00:34:37.073 [2024-07-14 19:06:24.966156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.966183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.966325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.966351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.966475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.966501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.966598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.966624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.966755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.966780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 
00:34:37.073 [2024-07-14 19:06:24.966897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.966935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.967072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.967101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.073 qpair failed and we were unable to recover it. 00:34:37.073 [2024-07-14 19:06:24.967204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.073 [2024-07-14 19:06:24.967231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.967326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.967352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.967481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.967507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 
00:34:37.074 [2024-07-14 19:06:24.967606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.967633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.967778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.967816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.967954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.967982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.968082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.968107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.968206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.968233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 
00:34:37.074 [2024-07-14 19:06:24.968350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.968377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.968518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.968544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.968639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.968665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.968781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.968819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.968926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.968952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 
00:34:37.074 [2024-07-14 19:06:24.969051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.969076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.969225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.969250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.969349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.969374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.969511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.969537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.969664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.969691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 
00:34:37.074 [2024-07-14 19:06:24.969795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.969821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.969943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.969982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.970085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.970113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.970244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.970270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.970367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.970393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 
00:34:37.074 [2024-07-14 19:06:24.970525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.970551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.970703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.970728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.970818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.970843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.970966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.970992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.971093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.971117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 
00:34:37.074 [2024-07-14 19:06:24.971238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.971263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.971355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.971380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.971470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.971494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.971618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.971643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.971736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.971761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 
00:34:37.074 [2024-07-14 19:06:24.971867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.971899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.971997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.972022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.972125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.972150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.972253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.972278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.972430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.972456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 
00:34:37.074 [2024-07-14 19:06:24.972547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.972572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.972668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.972694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.074 qpair failed and we were unable to recover it. 00:34:37.074 [2024-07-14 19:06:24.972790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.074 [2024-07-14 19:06:24.972816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.972917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.972944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.973070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.973095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 
00:34:37.075 [2024-07-14 19:06:24.973196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.973221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.973324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.973349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.973452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.973477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.973576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.973602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.973702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.973728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 
00:34:37.075 [2024-07-14 19:06:24.973842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.973889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.974037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.974065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.974169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.974196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.974329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.974355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 00:34:37.075 [2024-07-14 19:06:24.974478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.075 [2024-07-14 19:06:24.974504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.075 qpair failed and we were unable to recover it. 
00:34:37.075 [2024-07-14 19:06:24.974635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.974662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.974787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.974813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.974941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.974968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.975069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.975096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.975192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.975223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.975343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.975370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.975467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.975493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.975617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.975645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.975764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.975803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.975957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.975985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.976084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.976110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.976210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.976236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.976360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.976386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.976479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.976506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.976637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.976676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.976780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.976807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.976913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.976939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.977041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.977066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.977197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.977222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.977354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.977381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.977488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.977516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.977685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.977725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.977836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.977864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.977980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.978007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.978131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.978157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.978253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.978280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.978384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.978409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.978540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.075 [2024-07-14 19:06:24.978566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.075 qpair failed and we were unable to recover it.
00:34:37.075 [2024-07-14 19:06:24.978671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.978698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.978796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.978823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.978936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.978962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.979059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.979090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.979187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.979213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.979362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.979388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.979506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.979532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.979658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.979684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.979790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.979817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.979943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.979971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.980070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.980097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.980192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.980218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.980339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.980365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.980465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.980491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.980579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.980606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.980756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.980782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.980895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.980934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.981049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.981079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.981181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.981208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.981309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.981335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.981484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.981509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.981609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.981634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.981770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.981797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.981901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.981929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.982050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.982077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.982173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.982200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.982296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.982322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.982448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.982474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.982576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.982604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.982713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.982753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.982871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.982916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.983055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.983083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.983174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.983200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.983296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.983322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.983443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.983469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.983567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.983592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.983690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.983715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.983839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.983867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.983974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.984001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.076 [2024-07-14 19:06:24.984118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.076 [2024-07-14 19:06:24.984157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.076 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.984282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.984309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.984461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.984488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.984612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.984639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.984727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.984759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.984891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.984919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.985020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.985047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.985148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.985174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.985325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.985351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.985478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.985505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.985597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.985623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.985746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.985774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.985908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.985936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.986061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.986086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.986194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.986220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.986311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.986336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.986453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.986478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.986574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.986599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.986739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.986778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.986908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.986937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.987041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.987068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.987199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.987226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.987354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.987380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.987477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.987503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.987600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.987626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.987727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.987755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.987885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.987913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.988123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.988149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.988278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.988304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.988402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.988428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.988558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.988587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.988710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.988743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.988875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.988914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.989011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.989036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.989162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.989187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.989285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.989310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.989434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.989461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.989588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.989616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.989709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.989735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.989834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.989859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.989966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.077 [2024-07-14 19:06:24.989993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.077 qpair failed and we were unable to recover it.
00:34:37.077 [2024-07-14 19:06:24.990133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.078 [2024-07-14 19:06:24.990172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.078 qpair failed and we were unable to recover it.
00:34:37.078 [2024-07-14 19:06:24.990306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.078 [2024-07-14 19:06:24.990333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.078 qpair failed and we were unable to recover it.
00:34:37.078 [2024-07-14 19:06:24.990423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.078 [2024-07-14 19:06:24.990449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.078 qpair failed and we were unable to recover it.
00:34:37.078 [2024-07-14 19:06:24.990568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.078 [2024-07-14 19:06:24.990595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.078 qpair failed and we were unable to recover it.
00:34:37.078 [2024-07-14 19:06:24.990702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.078 [2024-07-14 19:06:24.990729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.078 qpair failed and we were unable to recover it.
00:34:37.078 [2024-07-14 19:06:24.990857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.078 [2024-07-14 19:06:24.990894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.078 qpair failed and we were unable to recover it.
00:34:37.078 [2024-07-14 19:06:24.990998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.078 [2024-07-14 19:06:24.991024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.078 qpair failed and we were unable to recover it.
00:34:37.078 [2024-07-14 19:06:24.991135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.991173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.991285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.991312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.991403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.991428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.991558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.991583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.991674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.991699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 
00:34:37.078 [2024-07-14 19:06:24.991799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.991827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.991931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.991958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.992054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.992080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.992204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.992230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.992322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.992348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 
00:34:37.078 [2024-07-14 19:06:24.992447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.992473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.992563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.992589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.992691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.992717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.992833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.992872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.993103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.993131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 
00:34:37.078 [2024-07-14 19:06:24.993264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.993291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.993415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.993441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.993571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.993598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.993734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.993772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.993915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.993943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 
00:34:37.078 [2024-07-14 19:06:24.994072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.994097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.994218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.994243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.994395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.994420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.994547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.994577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.994685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.994712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 
00:34:37.078 [2024-07-14 19:06:24.994837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.994862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.994966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.994993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.995097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.995122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.078 qpair failed and we were unable to recover it. 00:34:37.078 [2024-07-14 19:06:24.995249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.078 [2024-07-14 19:06:24.995275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.995375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.995401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-14 19:06:24.995520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.995547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.995687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.995726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.995838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.995865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.995969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.995996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.996121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.996147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-14 19:06:24.996252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.996280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.996379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.996405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.996537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.996563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.996686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.996712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.996820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.996848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-14 19:06:24.997013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.997039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.997149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.997188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.997303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.997331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.997461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.997487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.997575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.997601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-14 19:06:24.997694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.997720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.997868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.997913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.998021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.998048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.998148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.998174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.998264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.998289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-14 19:06:24.998389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.998420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.998508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.998533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.998640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.998680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.998782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.998809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.998937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.998964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-14 19:06:24.999061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.999087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.999210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.999236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.999341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.999368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.999497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.999523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.999646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.999672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-14 19:06:24.999761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.999787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:24.999892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:24.999919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:25.000015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:25.000042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:25.000167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:25.000194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:25.000311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:25.000338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 
00:34:37.079 [2024-07-14 19:06:25.000438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:25.000465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:25.000565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:25.000593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:25.000765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:25.000805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.079 [2024-07-14 19:06:25.000929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.079 [2024-07-14 19:06:25.000967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.079 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.001109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.001148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 
00:34:37.080 [2024-07-14 19:06:25.001254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.001282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.001379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.001406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.001506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.001532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.001657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.001683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.001818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.001848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 
00:34:37.080 [2024-07-14 19:06:25.001976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.002014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.002118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.002145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.002275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.002302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.002427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.002453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.002543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.002568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 
00:34:37.080 [2024-07-14 19:06:25.002663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.002689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.002800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.002838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.002966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.002995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.003098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.003124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.003219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.003246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 
00:34:37.080 [2024-07-14 19:06:25.003346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.003371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.003472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.003501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.003604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.003631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.003760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.003788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.003886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.003913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 
00:34:37.080 [2024-07-14 19:06:25.004073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.004103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.004201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.004228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.004328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.004354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.004447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.004474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.004587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.004625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 
00:34:37.080 [2024-07-14 19:06:25.004745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.004772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.004903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.004929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.005028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.005053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.005157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.005183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.005310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.005335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 
00:34:37.080 [2024-07-14 19:06:25.005436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.005463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.005586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.005612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.005712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.005738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.005835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.005861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.005996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.006035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 
00:34:37.080 [2024-07-14 19:06:25.006139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.006166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.006293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.006319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.006456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.006481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.080 qpair failed and we were unable to recover it. 00:34:37.080 [2024-07-14 19:06:25.006578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.080 [2024-07-14 19:06:25.006604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.006738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.006777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 
00:34:37.081 [2024-07-14 19:06:25.006884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.006912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.007017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.007043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.007141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.007167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.007287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.007313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.007439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.007465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 
00:34:37.081 [2024-07-14 19:06:25.007560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.007587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.007726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.007765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.007940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.007979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.008081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.008109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.008263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.008289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 
00:34:37.081 [2024-07-14 19:06:25.008383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.008409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.008500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.008525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.008621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.008646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.008770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.008796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.008923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.008953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 
00:34:37.081 [2024-07-14 19:06:25.009058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.009084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.009183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.009210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.009338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.009371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.009498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.009526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.009640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.009679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 
00:34:37.081 [2024-07-14 19:06:25.009778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.009811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.009919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.009947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.010081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.010108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.010231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.010262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.010361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.010388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 
00:34:37.081 [2024-07-14 19:06:25.010512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.010538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.010636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.010662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.010759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.010787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.010900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.010928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.011028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.011055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 
00:34:37.081 [2024-07-14 19:06:25.011173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.011200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.011326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.011352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.011449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.011475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.011572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.011600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.011734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.011762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 
00:34:37.081 [2024-07-14 19:06:25.011869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.011915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.012045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.012072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.012175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.012200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.081 [2024-07-14 19:06:25.012323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.081 [2024-07-14 19:06:25.012349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.081 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.012472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.012498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 
00:34:37.082 [2024-07-14 19:06:25.012595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.012622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.012718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.012746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.012847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.012880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.012987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.013015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.013145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.013171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 
00:34:37.082 [2024-07-14 19:06:25.013259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.013287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.013388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.013415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.013539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.013575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.013679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.013704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.013803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.013830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 
00:34:37.082 [2024-07-14 19:06:25.013938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.013965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.014085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.014111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.014264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.014290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.014391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.014417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.014523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.014550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 
00:34:37.082 [2024-07-14 19:06:25.014646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.014672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.014763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.014790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.014955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.014995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.015102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.015141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.015256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.015284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 
00:34:37.082 [2024-07-14 19:06:25.015409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.015435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.015569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.015595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.015721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.015747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.015851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.015885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.015985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.016011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 
00:34:37.082 [2024-07-14 19:06:25.016110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.016136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.016272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.016299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.016427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.016454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.016557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.016583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 00:34:37.082 [2024-07-14 19:06:25.016710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.082 [2024-07-14 19:06:25.016738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.082 qpair failed and we were unable to recover it. 
00:34:37.082 [2024-07-14 19:06:25.016827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.082 [2024-07-14 19:06:25.016853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.082 qpair failed and we were unable to recover it.
00:34:37.082 [2024-07-14 19:06:25.016977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.082 [2024-07-14 19:06:25.017004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.082 qpair failed and we were unable to recover it.
00:34:37.082 [2024-07-14 19:06:25.017104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.082 [2024-07-14 19:06:25.017132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.082 qpair failed and we were unable to recover it.
00:34:37.082 [2024-07-14 19:06:25.017296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.017323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.017448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.017475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.017567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.017593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.017722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.017748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.017870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.017902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.017997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.018023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.018147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.018173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.018297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.018324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.018422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.018448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.018598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.018624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.018749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.018775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.018874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.018905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.019005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.019031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.019180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.019206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.019306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.019338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.019490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.019517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.019624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.019649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.019798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.019824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.019958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.019984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.020084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.020110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.020235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.020263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.020394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.020420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.020539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.020565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.020660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.020686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.020807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.020833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.020963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.020990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.021116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.021142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.021297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.021324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.021416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.021442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.021542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.021570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.021695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.021721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.021820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.021847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.021978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.022005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.022103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.022129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.022227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.022252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.022342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.022367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.022496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.022521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.022656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.022681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.022777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.022803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.022905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.022932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.083 qpair failed and we were unable to recover it.
00:34:37.083 [2024-07-14 19:06:25.023029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.083 [2024-07-14 19:06:25.023054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.023156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.023182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.023281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.023306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.023401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.023426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.023555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.023583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.023683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.023709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.023887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.023926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.024026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.024053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.024158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.024184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.024304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.024330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.024427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.024454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.024562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.024601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.024743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.024782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.024917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.024946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.025051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.025082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.025205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.025231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.025355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.025381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.025479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.025507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.025720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.025749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.025843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.025869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.025979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.026005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.026098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.026124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.026252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.026278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.026384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.026411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.026528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.026555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.026677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.026703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.026795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.026821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.026922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.026948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.027070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.027096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.027237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.027263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.027352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.027378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.027470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.027496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.027590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.027615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.027738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.027765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.027883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.027923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.028028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.028054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.028157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.028183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.028300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.028325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.028428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.028454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.028544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.028571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.084 [2024-07-14 19:06:25.028672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.084 [2024-07-14 19:06:25.028700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.084 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.028809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.028838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.028941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.028968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.029073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.029099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.029195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.029223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.029313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.029339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.029489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.029514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.029641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.029669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.029790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.029816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.029907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.029934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.030027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.030053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.030159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.030184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.030312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.030338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.030466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.030491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.030578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.030604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.030731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.030756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.030852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.030884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.031013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.031039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.031138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.031164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.031287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.031313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.031435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.031461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.031580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.031605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.031728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.031754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.031978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.032018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.032119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.032146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.032278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.032304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.032431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.032458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.032570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.032596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.032726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.032752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.032872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.032930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.033041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.033080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.033224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.033263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.033366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.033393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.033516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.033542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.033636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.033662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.033788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.033815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.033917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.033949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.034038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.034064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.034156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.034182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.034328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.034354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.034452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.085 [2024-07-14 19:06:25.034479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.085 qpair failed and we were unable to recover it.
00:34:37.085 [2024-07-14 19:06:25.034612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.034643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.034762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.034788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.034909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.034936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.035043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.035071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.035191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.035230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.035361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.035388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.035490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.035516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.035638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.035664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.035784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.035822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.035929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.035956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.036056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.036082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.036204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.036230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.036355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.036380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.036490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.036517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.036621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.036647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.036763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.036789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.036888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.036916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.037044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.037072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.037163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.037188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.037397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.037423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.037524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.037550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.037650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.037677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.037794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.037820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.037924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.037951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.038101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.038127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.038232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.038259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.038363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.038389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.038558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.038597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.038751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.038790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.038942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.038971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.039063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.039089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.039184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.039210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.039311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.039337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.039462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.039489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.086 [2024-07-14 19:06:25.039587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.086 [2024-07-14 19:06:25.039614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.086 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.039754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.039793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.039907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.039943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.040044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.040071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.040176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.040202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.040302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.040328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.040447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.040478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.040575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.040602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.040694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.040721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.040861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.040906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.041036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.041062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.041159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.041184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.041317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.041342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.041472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.041500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.041619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.041645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.041758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.041797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.041900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.041928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.042062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.042091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.042227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.042254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.042342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.042369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.042468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.042494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.042622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.042649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.042793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.042832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.042948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.042976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.043102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.043128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.043255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.043281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.043411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.043437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.043531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.043557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.043658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.043686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.043818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.043846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.043969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.043997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.044098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.087 [2024-07-14 19:06:25.044124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.087 qpair failed and we were unable to recover it.
00:34:37.087 [2024-07-14 19:06:25.044221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.044248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.087 qpair failed and we were unable to recover it. 00:34:37.087 [2024-07-14 19:06:25.044384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.044411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.087 qpair failed and we were unable to recover it. 00:34:37.087 [2024-07-14 19:06:25.044506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.044532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.087 qpair failed and we were unable to recover it. 00:34:37.087 [2024-07-14 19:06:25.044657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.044682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.087 qpair failed and we were unable to recover it. 00:34:37.087 [2024-07-14 19:06:25.044821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.044859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.087 qpair failed and we were unable to recover it. 
00:34:37.087 [2024-07-14 19:06:25.045009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.045037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.087 qpair failed and we were unable to recover it. 00:34:37.087 [2024-07-14 19:06:25.045137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.045164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.087 qpair failed and we were unable to recover it. 00:34:37.087 [2024-07-14 19:06:25.045269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.045295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.087 qpair failed and we were unable to recover it. 00:34:37.087 [2024-07-14 19:06:25.045419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.045445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.087 qpair failed and we were unable to recover it. 00:34:37.087 [2024-07-14 19:06:25.045540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.087 [2024-07-14 19:06:25.045566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-14 19:06:25.045657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.045683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.045790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.045827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.045948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.045975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.046073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.046099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.046205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.046236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-14 19:06:25.046335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.046362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.046483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.046511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.046609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.046635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.046759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.046787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.046897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.046923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-14 19:06:25.047023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.047049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.047175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.047199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.047318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.047342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.047438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.047466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.047557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.047583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-14 19:06:25.047722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.047761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.047873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.047907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.048013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.048040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.048144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.048171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.048284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.048310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-14 19:06:25.048405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.048430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.048528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.048553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.048672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.048698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.048793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.048818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.048955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.048983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-14 19:06:25.049082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.049110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.049212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.049247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.049362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.049389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.049489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.049516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.049604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.049631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-14 19:06:25.049726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.049752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.049853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.049890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.049991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.050017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.050114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.050140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.050270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.050296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.088 [2024-07-14 19:06:25.050419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.050445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.050547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.050573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.050671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.050697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.050823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.050850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 00:34:37.088 [2024-07-14 19:06:25.050968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.088 [2024-07-14 19:06:25.050996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.088 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-14 19:06:25.051122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.051161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.051264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.051290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.051414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.051439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.051532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.051558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.051689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.051715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-14 19:06:25.051827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.051853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.051971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.051999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.052092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.052118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.052230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.052255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.052357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.052382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-14 19:06:25.052497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.052523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.052623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.052650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.052737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.052763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.052888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.052915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.053009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.053036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-14 19:06:25.053138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.053165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.053290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.053316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.053416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.053443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.053543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.053571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.053666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.053691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-14 19:06:25.053790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.053817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.053917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.053945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.054070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.054096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.054223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.054248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.054353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.054381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-14 19:06:25.054487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.054513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.054608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.054635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.054729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.054756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.054859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.054899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.055002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.055028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-14 19:06:25.055152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.055178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.055295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.055321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.055421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.055448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.055565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.055592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.055701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.055740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.089 [2024-07-14 19:06:25.055846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.055873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.055990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.056018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.056116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.056143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.056243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.056269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 00:34:37.089 [2024-07-14 19:06:25.056363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.089 [2024-07-14 19:06:25.056388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.089 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-14 19:06:25.056512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.056538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.056663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.056689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.056785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.056812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.056911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.056938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.057026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.057053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-14 19:06:25.057177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.057203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.057343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.057370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.057465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.057491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.057616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.057641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.057747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.057785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-14 19:06:25.057936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.057975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.058085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.058113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.058217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.058244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.058371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.058397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.058494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.058520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-14 19:06:25.058638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.058664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.058805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.058844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.058960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.058988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.059117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.059148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.059250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.059276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-14 19:06:25.059402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.059428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.059525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.059553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.059682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.059709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.059859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.059904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.060040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.060074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-14 19:06:25.060178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.060203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.060304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.060330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.060426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.060451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.060549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.060576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.060670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.060696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 [2024-07-14 19:06:25.060819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.060845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.060952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.060979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.061075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.061102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:37.090 [2024-07-14 19:06:25.061191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.061218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:37.090 [2024-07-14 19:06:25.061339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.061366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 qpair failed and we were unable to recover it. 
00:34:37.090 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:37.090 [2024-07-14 19:06:25.061511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.061538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.061633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.090 [2024-07-14 19:06:25.061660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.090 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.090 qpair failed and we were unable to recover it. 00:34:37.090 [2024-07-14 19:06:25.061757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.061783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.061868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.061904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 
00:34:37.091 [2024-07-14 19:06:25.061999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.062025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.062147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.062173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.062310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.062337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.062430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.062455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.062591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.062624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 
00:34:37.091 [2024-07-14 19:06:25.062733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.062771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.062897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.062925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.063028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.063054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.063180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.063215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.063309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.063335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 
00:34:37.091 [2024-07-14 19:06:25.063459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.063485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.063576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.063602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.063702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.063729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.063849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.063884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.063991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.064017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 
00:34:37.091 [2024-07-14 19:06:25.064106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.064132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.064231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.064258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.064406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.064437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.064536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.064561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.064692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.064718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 
00:34:37.091 [2024-07-14 19:06:25.064818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.064843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.064942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.064968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.065063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.065089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.065224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.065249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.065357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.065383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 
00:34:37.091 [2024-07-14 19:06:25.065505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.065531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.065659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.065685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.065815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.065841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.065947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.065973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.066070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.066095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 
00:34:37.091 [2024-07-14 19:06:25.066220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.066245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.066338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.066364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.091 [2024-07-14 19:06:25.066458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.091 [2024-07-14 19:06:25.066483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.091 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.066610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.066636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.066758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.066783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 
00:34:37.092 [2024-07-14 19:06:25.066874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.066908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.067016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.067041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.067160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.067185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.067291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.067316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.067418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.067453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 
00:34:37.092 [2024-07-14 19:06:25.067575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.067601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.067722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.067750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.067890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.067917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.068021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.068047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.068150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.068189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 
00:34:37.092 [2024-07-14 19:06:25.068294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.068322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.068444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.068470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.068559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.068584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.068678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.068703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 00:34:37.092 [2024-07-14 19:06:25.068793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.092 [2024-07-14 19:06:25.068818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.092 qpair failed and we were unable to recover it. 
00:34:37.092 [2024-07-14 19:06:25.068936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.092 [2024-07-14 19:06:25.068963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.092 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats continuously from 19:06:25.068936 through 19:06:25.083050, cycling over tqpair values 0x13aff20, 0x7f8a50000b90, 0x7f8a58000b90, and 0x7f8a60000b90, always with addr=10.0.0.2, port=4420 ...]
00:34:37.095 [2024-07-14 19:06:25.083172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.083198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.083334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.083359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.083447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.083472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.083564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.083590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.095 [2024-07-14 19:06:25.083760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.083800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 
00:34:37.095 [2024-07-14 19:06:25.083910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.083939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:37.095 [2024-07-14 19:06:25.084036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.084064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.084158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.084197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.095 [2024-07-14 19:06:25.084321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.084349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 
00:34:37.095 [2024-07-14 19:06:25.084473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.084501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.095 [2024-07-14 19:06:25.084628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.084655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.084757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.084796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.084910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.084938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.085045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.085071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 
00:34:37.095 [2024-07-14 19:06:25.085205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.085231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.085330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.085357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.085488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.085514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.085605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.085631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.085734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.085759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 
00:34:37.095 [2024-07-14 19:06:25.085854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.085885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.086011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.086036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.086124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.086149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.086242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.086267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.086365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.086392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 
00:34:37.095 [2024-07-14 19:06:25.086497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.086524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.086626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.086655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.086755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.086782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.086909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.086945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.087045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.087071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 
00:34:37.095 [2024-07-14 19:06:25.087171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.087205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.087367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.087393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.087493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.087519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.095 qpair failed and we were unable to recover it. 00:34:37.095 [2024-07-14 19:06:25.087612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.095 [2024-07-14 19:06:25.087637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.087759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.087784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 
00:34:37.096 [2024-07-14 19:06:25.087909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.087946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.088045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.088076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.088220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.088259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.088360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.088387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.088489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.088516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 
00:34:37.096 [2024-07-14 19:06:25.088633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.088659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.088747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.088773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.088898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.088924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.089025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.089052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.089171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.089196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 
00:34:37.096 [2024-07-14 19:06:25.089290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.089315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.089405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.089430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.089523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.089549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.089719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.089758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.089888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.089916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 
00:34:37.096 [2024-07-14 19:06:25.090046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.090073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.090165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.090191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.090311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.090337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.090458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.090483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.090614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.090640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 
00:34:37.096 [2024-07-14 19:06:25.090730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.090755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.090868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.090917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.091050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.091077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.091214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.091240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.091361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.091387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 
00:34:37.096 [2024-07-14 19:06:25.091483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.091509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.091626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.091652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.091747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.091774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.091897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.091925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.092044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.092070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 
00:34:37.096 [2024-07-14 19:06:25.092159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.092185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.092277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.092303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.092434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.092461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.092561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.092587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.092679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.092706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 
00:34:37.096 [2024-07-14 19:06:25.092834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.092861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.092974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.093000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.093097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.093123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.093215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.096 [2024-07-14 19:06:25.093240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.096 qpair failed and we were unable to recover it. 00:34:37.096 [2024-07-14 19:06:25.093337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.093362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 
00:34:37.097 [2024-07-14 19:06:25.093482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.093508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.093597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.093622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.093755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.093780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.093893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.093932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.094039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.094067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 
00:34:37.097 [2024-07-14 19:06:25.094202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.094227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.094377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.094403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.094498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.094525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.094613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.094641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.094738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.094764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 
00:34:37.097 [2024-07-14 19:06:25.094863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.094893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.095030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.095055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.095150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.095176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.095288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.095313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.095407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.095432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 
00:34:37.097 [2024-07-14 19:06:25.095533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.095561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.095696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.095736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.095845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.095873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.096016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.096043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.096149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.096175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 
00:34:37.097 [2024-07-14 19:06:25.096298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.096324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.096419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.096446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.096549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.096576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.096682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.096710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.096819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.096844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 
00:34:37.097 [2024-07-14 19:06:25.096960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.096986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.097106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.097132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.097281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.097307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.097427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.097458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.097588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.097615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 
00:34:37.097 [2024-07-14 19:06:25.097716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.097755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.097900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.097947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.098063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.098102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.098268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.098296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.098393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.098419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 
00:34:37.097 [2024-07-14 19:06:25.098520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.098547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.098678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.098705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.098802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.098829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.098971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.098999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.097 qpair failed and we were unable to recover it. 00:34:37.097 [2024-07-14 19:06:25.099108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.097 [2024-07-14 19:06:25.099136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 
00:34:37.098 [2024-07-14 19:06:25.099264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.099289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.099407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.099433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.099535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.099562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.099660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.099687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.099805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.099847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 
00:34:37.098 [2024-07-14 19:06:25.099962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.099990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.100146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.100173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.100273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.100300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.100402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.100428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.100551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.100577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 
00:34:37.098 [2024-07-14 19:06:25.100706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.100733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.100825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.100854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.100965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.100991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.101126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.101152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.101246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.101271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 
00:34:37.098 [2024-07-14 19:06:25.101365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.101397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.101493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.101519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.101621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.101648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.101782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.101807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.101970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.101996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 
00:34:37.098 [2024-07-14 19:06:25.102115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.102141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.102276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.102301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.102398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.102423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.102525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.102550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.102684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.102724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 
00:34:37.098 [2024-07-14 19:06:25.102863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.102900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.103033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.103059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.103193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.103220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.103376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.103403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.103535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.103562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 
00:34:37.098 [2024-07-14 19:06:25.103689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.103715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.103832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.103871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.103995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.104034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.104151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.104179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.104303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.104330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 
00:34:37.098 [2024-07-14 19:06:25.104536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.104563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.104664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.104692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.104823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.104851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.104972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.104999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 00:34:37.098 [2024-07-14 19:06:25.105127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.098 [2024-07-14 19:06:25.105153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.098 qpair failed and we were unable to recover it. 
00:34:37.098 [2024-07-14 19:06:25.105281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.105307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.105405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.105431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.105533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.105560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.105663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.105689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.105811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.105837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 
00:34:37.099 [2024-07-14 19:06:25.105957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.105984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.106108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.106135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.106345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.106371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.106465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.106491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.106632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.106658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 
00:34:37.099 [2024-07-14 19:06:25.106798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.106837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.106971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.106998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.107095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.107121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.107244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.107270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.107387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.107412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 
00:34:37.099 [2024-07-14 19:06:25.107541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.107576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.107668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.107695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.107827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.107853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.107982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.108020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.108126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.108152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 
00:34:37.099 [2024-07-14 19:06:25.108249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.108275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.108372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.108398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.108524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.108549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.108645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.108670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.108814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.108854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 
00:34:37.099 [2024-07-14 19:06:25.108973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.109000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.109115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.109161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.109273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 Malloc0 00:34:37.099 [2024-07-14 19:06:25.109300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.109424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.109450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 [2024-07-14 19:06:25.109565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.109593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 
00:34:37.099 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.099 [2024-07-14 19:06:25.109716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.109743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:37.099 [2024-07-14 19:06:25.109841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.109867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.099 [2024-07-14 19:06:25.110001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.110027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 00:34:37.099 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.099 [2024-07-14 19:06:25.110155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.099 [2024-07-14 19:06:25.110183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.099 qpair failed and we were unable to recover it. 
00:34:37.100 [2024-07-14 19:06:25.110304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.110330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.110433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.110460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.110590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.110616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.110704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.110730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.110820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.110845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 
00:34:37.100 [2024-07-14 19:06:25.110982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.111010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.111120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.111159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.111281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.111306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.111406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.111432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.111529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.111555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 
00:34:37.100 [2024-07-14 19:06:25.111678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.111704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.111808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.111837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.111972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.112010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.112157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.112196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.112345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.112372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 
00:34:37.100 [2024-07-14 19:06:25.112464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.112490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.112611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.112636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.112740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.112766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.112860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.112891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.112986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.100 [2024-07-14 19:06:25.113004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.113029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 
00:34:37.100 [2024-07-14 19:06:25.113162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.113188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.113277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.113303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.113411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.113436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.113534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.113561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.113684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.113710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 
00:34:37.100 [2024-07-14 19:06:25.113808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.113835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.113936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.113962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.114088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.114114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.114244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.114271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.114365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.114392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 
00:34:37.100 [2024-07-14 19:06:25.114509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.114547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.114652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.114680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.114787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.114826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.114947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.114974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.115100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.115126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 
00:34:37.100 [2024-07-14 19:06:25.115241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.115267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.115416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.115442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.115563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.115588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.100 [2024-07-14 19:06:25.115705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.100 [2024-07-14 19:06:25.115731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.100 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.115834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.115859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-14 19:06:25.116003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.116028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.116135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.116161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.116279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.116305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.116450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.116475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.116571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.116597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-14 19:06:25.116696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.116722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.116855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.116893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.116997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.117023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.117122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.117149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.117253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.117279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-14 19:06:25.117381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.117410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.117503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.117529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.117626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.117652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.117773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.117799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.117906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.117936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-14 19:06:25.118058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.118084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.118184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.118210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.118309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.118334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.118433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.118459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.118579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.118605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-14 19:06:25.118711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.118738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.118853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.118898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.119010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.119038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.119195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.119220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.119315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.119341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-14 19:06:25.119460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.119486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.119617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.119643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.119773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.119800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.119946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.119985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.120099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.120138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-14 19:06:25.120277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.120304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.120436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.120463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.120586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.120612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.120750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.120777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.120885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.120924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 
00:34:37.101 [2024-07-14 19:06:25.121032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.121060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 [2024-07-14 19:06:25.121188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.121214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.101 [2024-07-14 19:06:25.121335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.101 [2024-07-14 19:06:25.121360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.101 qpair failed and we were unable to recover it. 00:34:37.101 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:37.101 [2024-07-14 19:06:25.121464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.121489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.102 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.102 qpair failed and we were unable to recover it. 
00:34:37.102 [2024-07-14 19:06:25.121593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.121618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:37.102 [2024-07-14 19:06:25.121725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.121752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.121847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.121873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.121990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.122017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.122122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.122149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 
00:34:37.102 [2024-07-14 19:06:25.122277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.122304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.122431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.122457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.122553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.122579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.122714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.122752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.122887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.122915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 
00:34:37.102 [2024-07-14 19:06:25.123023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.123049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.123155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.123182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.123278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.123304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.123398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.123425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 00:34:37.102 [2024-07-14 19:06:25.123531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:37.102 [2024-07-14 19:06:25.123557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420 00:34:37.102 qpair failed and we were unable to recover it. 
00:34:37.102 [2024-07-14 19:06:25.123656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.123681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.123802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.123827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.123931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.123957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.124052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.124077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.124184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.124209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.124336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.124361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.124502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.124541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.124674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.124701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.124811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.124838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.124980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.125006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.125113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.125138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.125291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.125317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.125411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.125438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.125574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.125603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.125736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.125762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.125889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.125915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.126017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.126042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.126138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.126167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.126268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.126293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.126417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.126443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.126539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.126564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.126666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.126705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.126807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.126835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.102 qpair failed and we were unable to recover it.
00:34:37.102 [2024-07-14 19:06:25.126987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.102 [2024-07-14 19:06:25.127013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.127109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.127135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.127257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.127284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.127407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.127433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.127536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.127563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.127669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.127696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.127795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.127821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.127928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.127956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.128066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.128104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.128236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.128263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.128362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.128388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.128515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.128542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.128643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.128670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.128761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.128788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.128906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.128945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.129054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.129092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.129202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.129229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.103 [2024-07-14 19:06:25.129360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.129386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:37.103 [2024-07-14 19:06:25.129486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.129511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.103 [2024-07-14 19:06:25.129641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.129668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.129778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.129805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.129925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.129964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.130072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.130100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.130192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.130218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.130319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.130344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.130510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.130537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.130643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.130670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.130771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.130797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.130894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.130921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.131043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.131070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.131200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.131228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.131332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.131360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.131480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.131506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.131636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.131661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.131790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.131815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.131920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.131946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.132046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.132071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.132164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.132189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.103 [2024-07-14 19:06:25.132282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.103 [2024-07-14 19:06:25.132307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.103 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.132426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.132452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.132544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.132569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.132666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.132691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.132783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.132811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.132933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.132973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.133077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.133104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.133230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.133256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.133360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.133391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.133522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.133548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.133639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.133665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.133772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.133812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.133931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.133970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.134069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.134096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.134190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.134215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.134337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.134363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.134455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.134480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.134588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.134614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.134738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.134764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.134888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.134927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.135031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.135058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.135186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.135212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.135343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.135369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.135462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.135488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.135615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.135640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.135746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.135772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.135862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.135895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.135989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.136014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.136114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.136140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.136258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.136283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.136374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.136399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.136494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.136519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.136634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.136672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.136789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.136828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.136997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.137025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.104 [2024-07-14 19:06:25.137135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.104 [2024-07-14 19:06:25.137168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.104 qpair failed and we were unable to recover it.
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.105 [2024-07-14 19:06:25.137309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.137336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:37.105 [2024-07-14 19:06:25.137465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.137492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.137602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:37.105 [2024-07-14 19:06:25.137629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.137745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.137784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.137923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.137952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.138056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.138082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.138209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.138234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.138325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.138351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.138450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.138476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.138607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.138634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.138774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.138813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a58000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.138923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.138952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.139054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.139080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.139204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.139230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.139335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.139361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.139455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.139481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.139581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.139608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.139698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.139723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.139860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.139898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.139992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.140018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.140116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.140142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.140233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.140259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a60000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.140386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.140413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.140545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.140571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8a50000b90 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.140674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.140718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.140855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.140889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.140984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.141010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.141103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:37.105 [2024-07-14 19:06:25.141128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13aff20 with addr=10.0.0.2, port=4420
00:34:37.105 qpair failed and we were unable to recover it.
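The repeated `errno = 111` above is `ECONNREFUSED`: while the target is being restarted by the disconnect test, nothing listens on 10.0.0.2:4420, so every `connect()` from the host side is refused. A minimal sketch reproducing the same errno against a loopback port with no listener (the address and port here are illustrative, not taken from the test):

```python
import errno
import socket

def connect_errno(host: str, port: int) -> int:
    """Try a TCP connect(); return 0 on success or the errno on failure."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2.0)
    try:
        sock.connect((host, port))
        return 0
    except OSError as exc:
        return exc.errno or -1
    finally:
        sock.close()

# Grab a port that is currently free, close it, then connect to it:
# with no listener, the kernel answers with RST and connect() fails
# with ECONNREFUSED (numeric value 111 on Linux, as seen in the log).
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

print(connect_errno("127.0.0.1", free_port) == errno.ECONNREFUSED)
```

The initiator keeps retrying each qpair, which is why the same posix.c/nvme_tcp.c pair repeats until the listener comes back.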
00:34:37.105 [2024-07-14 19:06:25.141380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:37.105 [2024-07-14 19:06:25.143733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.105 [2024-07-14 19:06:25.143866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.105 [2024-07-14 19:06:25.143900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.105 [2024-07-14 19:06:25.143916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.105 [2024-07-14 19:06:25.143929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.105 [2024-07-14 19:06:25.143964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.105 19:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3755380
00:34:37.105 [2024-07-14 19:06:25.153531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.105 [2024-07-14 19:06:25.153639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.105 [2024-07-14 19:06:25.153665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.105 [2024-07-14 19:06:25.153680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.105 [2024-07-14 19:06:25.153692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.105 [2024-07-14 19:06:25.153720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.105 qpair failed and we were unable to recover it.
00:34:37.105 [2024-07-14 19:06:25.163572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.105 [2024-07-14 19:06:25.163676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.105 [2024-07-14 19:06:25.163708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.105 [2024-07-14 19:06:25.163723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.105 [2024-07-14 19:06:25.163735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.163762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.106 [2024-07-14 19:06:25.173584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.106 [2024-07-14 19:06:25.173697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.106 [2024-07-14 19:06:25.173722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.106 [2024-07-14 19:06:25.173737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.106 [2024-07-14 19:06:25.173749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.173777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.106 [2024-07-14 19:06:25.183541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.106 [2024-07-14 19:06:25.183692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.106 [2024-07-14 19:06:25.183721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.106 [2024-07-14 19:06:25.183736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.106 [2024-07-14 19:06:25.183751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.183794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.106 [2024-07-14 19:06:25.193588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.106 [2024-07-14 19:06:25.193688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.106 [2024-07-14 19:06:25.193715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.106 [2024-07-14 19:06:25.193729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.106 [2024-07-14 19:06:25.193742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.193770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.106 [2024-07-14 19:06:25.203601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.106 [2024-07-14 19:06:25.203731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.106 [2024-07-14 19:06:25.203757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.106 [2024-07-14 19:06:25.203772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.106 [2024-07-14 19:06:25.203784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.203831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.106 [2024-07-14 19:06:25.213585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.106 [2024-07-14 19:06:25.213689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.106 [2024-07-14 19:06:25.213715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.106 [2024-07-14 19:06:25.213729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.106 [2024-07-14 19:06:25.213741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.213768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.106 [2024-07-14 19:06:25.223624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.106 [2024-07-14 19:06:25.223719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.106 [2024-07-14 19:06:25.223745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.106 [2024-07-14 19:06:25.223759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.106 [2024-07-14 19:06:25.223771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.223799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.106 [2024-07-14 19:06:25.233643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.106 [2024-07-14 19:06:25.233742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.106 [2024-07-14 19:06:25.233768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.106 [2024-07-14 19:06:25.233782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.106 [2024-07-14 19:06:25.233794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.233821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.106 [2024-07-14 19:06:25.243666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.106 [2024-07-14 19:06:25.243774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.106 [2024-07-14 19:06:25.243799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.106 [2024-07-14 19:06:25.243814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.106 [2024-07-14 19:06:25.243826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.243854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.106 [2024-07-14 19:06:25.253708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.106 [2024-07-14 19:06:25.253813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.106 [2024-07-14 19:06:25.253843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.106 [2024-07-14 19:06:25.253858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.106 [2024-07-14 19:06:25.253870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.106 [2024-07-14 19:06:25.253905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.106 qpair failed and we were unable to recover it.
00:34:37.367 [2024-07-14 19:06:25.263750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.367 [2024-07-14 19:06:25.263851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.367 [2024-07-14 19:06:25.263883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.367 [2024-07-14 19:06:25.263902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.367 [2024-07-14 19:06:25.263915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.367 [2024-07-14 19:06:25.263945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.367 qpair failed and we were unable to recover it.
00:34:37.367 [2024-07-14 19:06:25.273909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.367 [2024-07-14 19:06:25.274035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.367 [2024-07-14 19:06:25.274061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.367 [2024-07-14 19:06:25.274076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.367 [2024-07-14 19:06:25.274088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.367 [2024-07-14 19:06:25.274115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.367 qpair failed and we were unable to recover it.
00:34:37.367 [2024-07-14 19:06:25.283806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.367 [2024-07-14 19:06:25.283914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.367 [2024-07-14 19:06:25.283939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.367 [2024-07-14 19:06:25.283953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.367 [2024-07-14 19:06:25.283966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.367 [2024-07-14 19:06:25.283993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.367 qpair failed and we were unable to recover it.
00:34:37.367 [2024-07-14 19:06:25.293817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.367 [2024-07-14 19:06:25.293943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.367 [2024-07-14 19:06:25.293968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.367 [2024-07-14 19:06:25.293983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.367 [2024-07-14 19:06:25.294002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.367 [2024-07-14 19:06:25.294030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.367 qpair failed and we were unable to recover it.
00:34:37.367 [2024-07-14 19:06:25.303866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.367 [2024-07-14 19:06:25.303976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.367 [2024-07-14 19:06:25.304002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.367 [2024-07-14 19:06:25.304016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.367 [2024-07-14 19:06:25.304028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.367 [2024-07-14 19:06:25.304059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.367 qpair failed and we were unable to recover it.
00:34:37.367 [2024-07-14 19:06:25.313908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.367 [2024-07-14 19:06:25.314006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.367 [2024-07-14 19:06:25.314032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.367 [2024-07-14 19:06:25.314046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.367 [2024-07-14 19:06:25.314058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.367 [2024-07-14 19:06:25.314086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.367 qpair failed and we were unable to recover it.
00:34:37.367 [2024-07-14 19:06:25.323974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.367 [2024-07-14 19:06:25.324076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.367 [2024-07-14 19:06:25.324101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.367 [2024-07-14 19:06:25.324115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.367 [2024-07-14 19:06:25.324127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.367 [2024-07-14 19:06:25.324155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.367 qpair failed and we were unable to recover it.
00:34:37.367 [2024-07-14 19:06:25.333949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.367 [2024-07-14 19:06:25.334058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.367 [2024-07-14 19:06:25.334083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.367 [2024-07-14 19:06:25.334097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.367 [2024-07-14 19:06:25.334109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.368 [2024-07-14 19:06:25.334137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.368 qpair failed and we were unable to recover it.
00:34:37.368 [2024-07-14 19:06:25.343996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.344099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.344124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.344138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.344150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.344180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.354029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.354126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.354152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.354166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.354178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.354205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.364030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.364132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.364158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.364172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.364184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.364211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.374046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.374150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.374175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.374190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.374202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.374229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.384087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.384184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.384209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.384223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.384244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.384272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.394124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.394222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.394248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.394263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.394275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.394303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.404236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.404341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.404366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.404380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.404393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.404420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.414193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.414322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.414347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.414361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.414373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.414404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.424203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.424328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.424353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.424367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.424379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.424409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.434232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.434329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.434353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.434367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.434380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.434407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.444246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.444359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.444385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.444400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.444412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.444440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.454319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.454423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.454448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.454462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.454475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.454502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.464287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.464381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.464406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.464420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.464432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.368 [2024-07-14 19:06:25.464460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.368 qpair failed and we were unable to recover it. 
00:34:37.368 [2024-07-14 19:06:25.474343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.368 [2024-07-14 19:06:25.474441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.368 [2024-07-14 19:06:25.474466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.368 [2024-07-14 19:06:25.474480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.368 [2024-07-14 19:06:25.474498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.474527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.484338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.484436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.484461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.484474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.484487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.484514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.494488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.494585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.494626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.494640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.494652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.494696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.504435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.504543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.504572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.504586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.504601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.504629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.514530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.514639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.514680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.514694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.514706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.514748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.524471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.524571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.524597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.524612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.524624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.524652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.534592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.534698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.534723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.534737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.534749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.534777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.544524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.544620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.544646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.544660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.544672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.544699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.554574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.554686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.554712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.554726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.554738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.554766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.564671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.564776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.564817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.564836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.564849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.564901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.574644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.574752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.574778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.574792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.574804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.574832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.369 [2024-07-14 19:06:25.584655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.369 [2024-07-14 19:06:25.584781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.369 [2024-07-14 19:06:25.584806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.369 [2024-07-14 19:06:25.584820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.369 [2024-07-14 19:06:25.584833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.369 [2024-07-14 19:06:25.584863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.369 qpair failed and we were unable to recover it. 
00:34:37.630 [2024-07-14 19:06:25.594672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.630 [2024-07-14 19:06:25.594772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.630 [2024-07-14 19:06:25.594798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.630 [2024-07-14 19:06:25.594812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.630 [2024-07-14 19:06:25.594824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.630 [2024-07-14 19:06:25.594855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.630 qpair failed and we were unable to recover it. 
00:34:37.630 [2024-07-14 19:06:25.604711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.630 [2024-07-14 19:06:25.604817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.630 [2024-07-14 19:06:25.604843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.630 [2024-07-14 19:06:25.604857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.630 [2024-07-14 19:06:25.604869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.630 [2024-07-14 19:06:25.604907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.630 qpair failed and we were unable to recover it. 
00:34:37.630 [2024-07-14 19:06:25.614734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.630 [2024-07-14 19:06:25.614851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.630 [2024-07-14 19:06:25.614885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.630 [2024-07-14 19:06:25.614903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.630 [2024-07-14 19:06:25.614916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.630 [2024-07-14 19:06:25.614944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.630 qpair failed and we were unable to recover it. 
00:34:37.630 [2024-07-14 19:06:25.624751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.630 [2024-07-14 19:06:25.624883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.630 [2024-07-14 19:06:25.624917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.630 [2024-07-14 19:06:25.624931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.630 [2024-07-14 19:06:25.624943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.624972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.634786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.634931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.634957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.634971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.634983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.635011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.644834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.644950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.644976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.644990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.645002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.645029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.654859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.654969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.654995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.655014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.655028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.655057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.664886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.664989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.665014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.665029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.665041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.665072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.674890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.674997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.675021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.675036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.675048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.675075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.684925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.685025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.685051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.685064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.685077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.685104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.694957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.695059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.695084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.695098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.695110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.695138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.704984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.705082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.705108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.705122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.705135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.705165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.715054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.715153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.715179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.715193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.715205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.715233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.725062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.725176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.725201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.725215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.725228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.725256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.735092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.735190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.735214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.735228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.735240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.735268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.745112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.745213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.745238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.745259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.745272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.745301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.755206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.755302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.755329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.755345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.755358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.631 [2024-07-14 19:06:25.755386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.631 qpair failed and we were unable to recover it. 
00:34:37.631 [2024-07-14 19:06:25.765174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.631 [2024-07-14 19:06:25.765286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.631 [2024-07-14 19:06:25.765311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.631 [2024-07-14 19:06:25.765325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.631 [2024-07-14 19:06:25.765337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.632 [2024-07-14 19:06:25.765364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.632 qpair failed and we were unable to recover it. 
00:34:37.632 [2024-07-14 19:06:25.775237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.632 [2024-07-14 19:06:25.775342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.632 [2024-07-14 19:06:25.775367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.632 [2024-07-14 19:06:25.775382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.632 [2024-07-14 19:06:25.775394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.632 [2024-07-14 19:06:25.775423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.632 qpair failed and we were unable to recover it. 
00:34:37.632 [2024-07-14 19:06:25.785231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.632 [2024-07-14 19:06:25.785326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.632 [2024-07-14 19:06:25.785351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.632 [2024-07-14 19:06:25.785365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.632 [2024-07-14 19:06:25.785378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.632 [2024-07-14 19:06:25.785405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.632 qpair failed and we were unable to recover it. 
00:34:37.632 [2024-07-14 19:06:25.795234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.632 [2024-07-14 19:06:25.795327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.632 [2024-07-14 19:06:25.795352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.632 [2024-07-14 19:06:25.795366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.632 [2024-07-14 19:06:25.795378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.632 [2024-07-14 19:06:25.795406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.632 qpair failed and we were unable to recover it. 
00:34:37.632 [2024-07-14 19:06:25.805258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.632 [2024-07-14 19:06:25.805359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.632 [2024-07-14 19:06:25.805385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.632 [2024-07-14 19:06:25.805399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.632 [2024-07-14 19:06:25.805411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.632 [2024-07-14 19:06:25.805439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.632 qpair failed and we were unable to recover it. 
00:34:37.632 [2024-07-14 19:06:25.815302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.632 [2024-07-14 19:06:25.815415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.632 [2024-07-14 19:06:25.815440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.632 [2024-07-14 19:06:25.815454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.632 [2024-07-14 19:06:25.815467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.632 [2024-07-14 19:06:25.815494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.632 qpair failed and we were unable to recover it. 
00:34:37.632 [2024-07-14 19:06:25.825345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.632 [2024-07-14 19:06:25.825443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.632 [2024-07-14 19:06:25.825468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.632 [2024-07-14 19:06:25.825484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.632 [2024-07-14 19:06:25.825496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.632 [2024-07-14 19:06:25.825525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.632 qpair failed and we were unable to recover it. 
00:34:37.632 [2024-07-14 19:06:25.835384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.632 [2024-07-14 19:06:25.835483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.632 [2024-07-14 19:06:25.835517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.632 [2024-07-14 19:06:25.835533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.632 [2024-07-14 19:06:25.835546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.632 [2024-07-14 19:06:25.835575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.632 qpair failed and we were unable to recover it. 
00:34:37.632 [2024-07-14 19:06:25.845415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.632 [2024-07-14 19:06:25.845528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.632 [2024-07-14 19:06:25.845553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.632 [2024-07-14 19:06:25.845568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.632 [2024-07-14 19:06:25.845580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.632 [2024-07-14 19:06:25.845607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.632 qpair failed and we were unable to recover it. 
00:34:37.632 [2024-07-14 19:06:25.855442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.632 [2024-07-14 19:06:25.855573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.892 [2024-07-14 19:06:25.855599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.892 [2024-07-14 19:06:25.855615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.892 [2024-07-14 19:06:25.855631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.892 [2024-07-14 19:06:25.855660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.892 qpair failed and we were unable to recover it. 
00:34:37.892 [2024-07-14 19:06:25.865468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.892 [2024-07-14 19:06:25.865570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.892 [2024-07-14 19:06:25.865595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.892 [2024-07-14 19:06:25.865610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.892 [2024-07-14 19:06:25.865625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.892 [2024-07-14 19:06:25.865652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.892 qpair failed and we were unable to recover it. 
00:34:37.892 [2024-07-14 19:06:25.875472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.892 [2024-07-14 19:06:25.875567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.892 [2024-07-14 19:06:25.875592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.892 [2024-07-14 19:06:25.875606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.892 [2024-07-14 19:06:25.875618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.892 [2024-07-14 19:06:25.875651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.892 qpair failed and we were unable to recover it. 
00:34:37.892 [2024-07-14 19:06:25.885502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.892 [2024-07-14 19:06:25.885635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.892 [2024-07-14 19:06:25.885660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.892 [2024-07-14 19:06:25.885673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.892 [2024-07-14 19:06:25.885686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.892 [2024-07-14 19:06:25.885713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.892 qpair failed and we were unable to recover it. 
00:34:37.892 [2024-07-14 19:06:25.895540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.892 [2024-07-14 19:06:25.895659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.892 [2024-07-14 19:06:25.895684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.892 [2024-07-14 19:06:25.895698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.892 [2024-07-14 19:06:25.895710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.892 [2024-07-14 19:06:25.895737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.892 qpair failed and we were unable to recover it. 
00:34:37.892 [2024-07-14 19:06:25.905641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.892 [2024-07-14 19:06:25.905747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.892 [2024-07-14 19:06:25.905772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.892 [2024-07-14 19:06:25.905787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.892 [2024-07-14 19:06:25.905799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.892 [2024-07-14 19:06:25.905828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.892 qpair failed and we were unable to recover it. 
00:34:37.892 [2024-07-14 19:06:25.915600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.892 [2024-07-14 19:06:25.915721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.892 [2024-07-14 19:06:25.915746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.892 [2024-07-14 19:06:25.915760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.893 [2024-07-14 19:06:25.915773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.893 [2024-07-14 19:06:25.915800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.893 qpair failed and we were unable to recover it. 
00:34:37.893 [2024-07-14 19:06:25.925642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.893 [2024-07-14 19:06:25.925753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.893 [2024-07-14 19:06:25.925782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.893 [2024-07-14 19:06:25.925797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.893 [2024-07-14 19:06:25.925809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.893 [2024-07-14 19:06:25.925836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.893 qpair failed and we were unable to recover it. 
00:34:37.893 [2024-07-14 19:06:25.935761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.893 [2024-07-14 19:06:25.935918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.893 [2024-07-14 19:06:25.935943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.893 [2024-07-14 19:06:25.935957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.893 [2024-07-14 19:06:25.935969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.893 [2024-07-14 19:06:25.935997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.893 qpair failed and we were unable to recover it. 
00:34:37.893 [2024-07-14 19:06:25.945716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.893 [2024-07-14 19:06:25.945818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.893 [2024-07-14 19:06:25.945843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.893 [2024-07-14 19:06:25.945857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.893 [2024-07-14 19:06:25.945870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.893 [2024-07-14 19:06:25.945905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.893 qpair failed and we were unable to recover it. 
00:34:37.893 [2024-07-14 19:06:25.955833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.893 [2024-07-14 19:06:25.955936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.893 [2024-07-14 19:06:25.955962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.893 [2024-07-14 19:06:25.955975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.893 [2024-07-14 19:06:25.955988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:37.893 [2024-07-14 19:06:25.956015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:37.893 qpair failed and we were unable to recover it. 
00:34:37.893 [2024-07-14 19:06:25.965775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:25.965883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:25.965909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:25.965923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:25.965935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:25.965967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.893 [2024-07-14 19:06:25.975832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:25.975942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:25.975966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:25.975981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:25.975993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:25.976020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.893 [2024-07-14 19:06:25.985843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:25.985950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:25.985976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:25.985990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:25.986002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:25.986029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.893 [2024-07-14 19:06:25.995860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:25.996010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:25.996035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:25.996049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:25.996062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:25.996089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.893 [2024-07-14 19:06:26.005925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:26.006060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:26.006085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:26.006099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:26.006111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:26.006138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.893 [2024-07-14 19:06:26.015911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:26.016009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:26.016040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:26.016055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:26.016067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:26.016097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.893 [2024-07-14 19:06:26.025953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:26.026053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:26.026078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:26.026092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:26.026104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:26.026131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.893 [2024-07-14 19:06:26.035937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:26.036036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:26.036061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:26.036075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:26.036087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:26.036115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.893 [2024-07-14 19:06:26.045969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:26.046070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:26.046096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:26.046110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:26.046122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:26.046150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.893 [2024-07-14 19:06:26.056019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.893 [2024-07-14 19:06:26.056119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.893 [2024-07-14 19:06:26.056144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.893 [2024-07-14 19:06:26.056158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.893 [2024-07-14 19:06:26.056176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.893 [2024-07-14 19:06:26.056205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.893 qpair failed and we were unable to recover it.
00:34:37.894 [2024-07-14 19:06:26.066099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.894 [2024-07-14 19:06:26.066210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.894 [2024-07-14 19:06:26.066236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.894 [2024-07-14 19:06:26.066251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.894 [2024-07-14 19:06:26.066263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.894 [2024-07-14 19:06:26.066291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.894 qpair failed and we were unable to recover it.
00:34:37.894 [2024-07-14 19:06:26.076044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.894 [2024-07-14 19:06:26.076138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.894 [2024-07-14 19:06:26.076164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.894 [2024-07-14 19:06:26.076178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.894 [2024-07-14 19:06:26.076190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.894 [2024-07-14 19:06:26.076218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.894 qpair failed and we were unable to recover it.
00:34:37.894 [2024-07-14 19:06:26.086164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.894 [2024-07-14 19:06:26.086267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.894 [2024-07-14 19:06:26.086292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.894 [2024-07-14 19:06:26.086306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.894 [2024-07-14 19:06:26.086318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.894 [2024-07-14 19:06:26.086346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.894 qpair failed and we were unable to recover it.
00:34:37.894 [2024-07-14 19:06:26.096113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.894 [2024-07-14 19:06:26.096217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.894 [2024-07-14 19:06:26.096242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.894 [2024-07-14 19:06:26.096257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.894 [2024-07-14 19:06:26.096272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.894 [2024-07-14 19:06:26.096299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.894 qpair failed and we were unable to recover it.
00:34:37.894 [2024-07-14 19:06:26.106160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.894 [2024-07-14 19:06:26.106269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.894 [2024-07-14 19:06:26.106294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.894 [2024-07-14 19:06:26.106308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.894 [2024-07-14 19:06:26.106320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.894 [2024-07-14 19:06:26.106347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.894 qpair failed and we were unable to recover it.
00:34:37.894 [2024-07-14 19:06:26.116170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:37.894 [2024-07-14 19:06:26.116271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:37.894 [2024-07-14 19:06:26.116296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:37.894 [2024-07-14 19:06:26.116311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:37.894 [2024-07-14 19:06:26.116323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:37.894 [2024-07-14 19:06:26.116351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:37.894 qpair failed and we were unable to recover it.
00:34:38.153 [2024-07-14 19:06:26.126189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.153 [2024-07-14 19:06:26.126322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.153 [2024-07-14 19:06:26.126347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.153 [2024-07-14 19:06:26.126361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.153 [2024-07-14 19:06:26.126374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.153 [2024-07-14 19:06:26.126401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.153 qpair failed and we were unable to recover it.
00:34:38.153 [2024-07-14 19:06:26.136209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.153 [2024-07-14 19:06:26.136307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.153 [2024-07-14 19:06:26.136332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.153 [2024-07-14 19:06:26.136346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.153 [2024-07-14 19:06:26.136358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.153 [2024-07-14 19:06:26.136385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.153 qpair failed and we were unable to recover it.
00:34:38.153 [2024-07-14 19:06:26.146281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.153 [2024-07-14 19:06:26.146402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.153 [2024-07-14 19:06:26.146428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.153 [2024-07-14 19:06:26.146442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.153 [2024-07-14 19:06:26.146460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.153 [2024-07-14 19:06:26.146488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.153 qpair failed and we were unable to recover it.
00:34:38.153 [2024-07-14 19:06:26.156330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.153 [2024-07-14 19:06:26.156427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.153 [2024-07-14 19:06:26.156452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.153 [2024-07-14 19:06:26.156466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.153 [2024-07-14 19:06:26.156478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.153 [2024-07-14 19:06:26.156508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.153 qpair failed and we were unable to recover it.
00:34:38.153 [2024-07-14 19:06:26.166356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.153 [2024-07-14 19:06:26.166455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.153 [2024-07-14 19:06:26.166480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.153 [2024-07-14 19:06:26.166494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.153 [2024-07-14 19:06:26.166506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.166534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.176345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.176456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.176481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.176495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.176507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.176534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.186480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.186621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.186645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.186659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.186672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.186699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.196409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.196519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.196545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.196559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.196571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.196599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.206558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.206701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.206726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.206739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.206752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.206779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.216466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.216584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.216609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.216623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.216636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.216663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.226475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.226573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.226597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.226611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.226624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.226650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.236483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.236576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.236601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.236615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.236633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.236661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.246507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.246637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.246662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.246676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.246688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.246715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.256573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.256677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.256702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.256716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.256728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.256755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.266563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.266662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.266688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.266702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.266714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.266742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.276655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.276755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.276780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.276795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.276807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.276834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.286650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.286774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.286799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.286812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.286825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.286852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.296705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.296815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.296840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.296855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.296868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.296903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.306710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.154 [2024-07-14 19:06:26.306838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.154 [2024-07-14 19:06:26.306863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.154 [2024-07-14 19:06:26.306883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.154 [2024-07-14 19:06:26.306897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.154 [2024-07-14 19:06:26.306926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.154 qpair failed and we were unable to recover it.
00:34:38.154 [2024-07-14 19:06:26.316721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.155 [2024-07-14 19:06:26.316841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.155 [2024-07-14 19:06:26.316867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.155 [2024-07-14 19:06:26.316888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.155 [2024-07-14 19:06:26.316901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.155 [2024-07-14 19:06:26.316931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.155 qpair failed and we were unable to recover it.
00:34:38.155 [2024-07-14 19:06:26.326754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.155 [2024-07-14 19:06:26.326860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.155 [2024-07-14 19:06:26.326896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.155 [2024-07-14 19:06:26.326916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.155 [2024-07-14 19:06:26.326933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.155 [2024-07-14 19:06:26.326962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.155 qpair failed and we were unable to recover it. 
00:34:38.155 [2024-07-14 19:06:26.336798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.155 [2024-07-14 19:06:26.336905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.155 [2024-07-14 19:06:26.336930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.155 [2024-07-14 19:06:26.336945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.155 [2024-07-14 19:06:26.336957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.155 [2024-07-14 19:06:26.336985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.155 qpair failed and we were unable to recover it. 
00:34:38.155 [2024-07-14 19:06:26.346846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.155 [2024-07-14 19:06:26.346952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.155 [2024-07-14 19:06:26.346977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.155 [2024-07-14 19:06:26.346992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.155 [2024-07-14 19:06:26.347005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.155 [2024-07-14 19:06:26.347035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.155 qpair failed and we were unable to recover it. 
00:34:38.155 [2024-07-14 19:06:26.356835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.155 [2024-07-14 19:06:26.356963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.155 [2024-07-14 19:06:26.356988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.155 [2024-07-14 19:06:26.357003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.155 [2024-07-14 19:06:26.357015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.155 [2024-07-14 19:06:26.357042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.155 qpair failed and we were unable to recover it. 
00:34:38.155 [2024-07-14 19:06:26.366867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.155 [2024-07-14 19:06:26.366983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.155 [2024-07-14 19:06:26.367008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.155 [2024-07-14 19:06:26.367022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.155 [2024-07-14 19:06:26.367034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.155 [2024-07-14 19:06:26.367062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.155 qpair failed and we were unable to recover it. 
00:34:38.155 [2024-07-14 19:06:26.376895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.155 [2024-07-14 19:06:26.377003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.155 [2024-07-14 19:06:26.377028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.155 [2024-07-14 19:06:26.377042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.155 [2024-07-14 19:06:26.377054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.155 [2024-07-14 19:06:26.377081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.155 qpair failed and we were unable to recover it. 
00:34:38.415 [2024-07-14 19:06:26.386921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.415 [2024-07-14 19:06:26.387023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.415 [2024-07-14 19:06:26.387049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.415 [2024-07-14 19:06:26.387063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.415 [2024-07-14 19:06:26.387075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.415 [2024-07-14 19:06:26.387106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.415 qpair failed and we were unable to recover it. 
00:34:38.415 [2024-07-14 19:06:26.396931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.415 [2024-07-14 19:06:26.397029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.415 [2024-07-14 19:06:26.397055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.415 [2024-07-14 19:06:26.397069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.415 [2024-07-14 19:06:26.397081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.415 [2024-07-14 19:06:26.397108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.415 qpair failed and we were unable to recover it. 
00:34:38.415 [2024-07-14 19:06:26.407003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.415 [2024-07-14 19:06:26.407153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.415 [2024-07-14 19:06:26.407177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.415 [2024-07-14 19:06:26.407191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.415 [2024-07-14 19:06:26.407203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.415 [2024-07-14 19:06:26.407246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.415 qpair failed and we were unable to recover it. 
00:34:38.415 [2024-07-14 19:06:26.417032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.415 [2024-07-14 19:06:26.417135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.415 [2024-07-14 19:06:26.417164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.415 [2024-07-14 19:06:26.417185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.415 [2024-07-14 19:06:26.417199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.417231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.427025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.427125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.427151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.427165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.427177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.427206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.437096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.437230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.437256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.437270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.437282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.437309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.447079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.447201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.447227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.447241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.447253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.447280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.457147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.457256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.457281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.457295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.457308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.457335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.467124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.467219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.467244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.467258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.467270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.467299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.477203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.477303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.477328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.477342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.477354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.477381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.487212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.487331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.487356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.487369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.487381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.487409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.497219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.497321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.497346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.497360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.497372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.497403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.507251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.507382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.507407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.507427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.507440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.507470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.517261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.517357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.517382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.517397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.517409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.517436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.527357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.527485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.527510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.527524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.527536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.527563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.537367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.537499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.537524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.537538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.537549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.537576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.547346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.547464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.547488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.547502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.547515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.547541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.416 qpair failed and we were unable to recover it. 
00:34:38.416 [2024-07-14 19:06:26.557479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.416 [2024-07-14 19:06:26.557617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.416 [2024-07-14 19:06:26.557642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.416 [2024-07-14 19:06:26.557656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.416 [2024-07-14 19:06:26.557668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.416 [2024-07-14 19:06:26.557696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.417 qpair failed and we were unable to recover it. 
00:34:38.417 [2024-07-14 19:06:26.567418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.417 [2024-07-14 19:06:26.567524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.417 [2024-07-14 19:06:26.567549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.417 [2024-07-14 19:06:26.567564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.417 [2024-07-14 19:06:26.567576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.417 [2024-07-14 19:06:26.567604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.417 qpair failed and we were unable to recover it. 
00:34:38.417 [2024-07-14 19:06:26.577478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.417 [2024-07-14 19:06:26.577595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.417 [2024-07-14 19:06:26.577621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.417 [2024-07-14 19:06:26.577635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.417 [2024-07-14 19:06:26.577647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.417 [2024-07-14 19:06:26.577674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.417 qpair failed and we were unable to recover it. 
00:34:38.417 [2024-07-14 19:06:26.587513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.417 [2024-07-14 19:06:26.587612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.417 [2024-07-14 19:06:26.587638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.417 [2024-07-14 19:06:26.587652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.417 [2024-07-14 19:06:26.587664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.417 [2024-07-14 19:06:26.587702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.417 qpair failed and we were unable to recover it. 
00:34:38.417 [2024-07-14 19:06:26.597550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.417 [2024-07-14 19:06:26.597676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.417 [2024-07-14 19:06:26.597706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.417 [2024-07-14 19:06:26.597722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.417 [2024-07-14 19:06:26.597737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.417 [2024-07-14 19:06:26.597764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.417 qpair failed and we were unable to recover it. 
00:34:38.417 [2024-07-14 19:06:26.607552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.417 [2024-07-14 19:06:26.607679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.417 [2024-07-14 19:06:26.607704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.417 [2024-07-14 19:06:26.607718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.417 [2024-07-14 19:06:26.607730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.417 [2024-07-14 19:06:26.607757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.417 qpair failed and we were unable to recover it.
00:34:38.417 [2024-07-14 19:06:26.617600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.417 [2024-07-14 19:06:26.617699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.417 [2024-07-14 19:06:26.617725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.417 [2024-07-14 19:06:26.617739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.417 [2024-07-14 19:06:26.617751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.417 [2024-07-14 19:06:26.617779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.417 qpair failed and we were unable to recover it.
00:34:38.417 [2024-07-14 19:06:26.627620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.417 [2024-07-14 19:06:26.627734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.417 [2024-07-14 19:06:26.627758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.417 [2024-07-14 19:06:26.627773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.417 [2024-07-14 19:06:26.627785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.417 [2024-07-14 19:06:26.627813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.417 qpair failed and we were unable to recover it.
00:34:38.417 [2024-07-14 19:06:26.637671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.417 [2024-07-14 19:06:26.637786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.417 [2024-07-14 19:06:26.637811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.417 [2024-07-14 19:06:26.637826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.417 [2024-07-14 19:06:26.637838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.417 [2024-07-14 19:06:26.637871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.417 qpair failed and we were unable to recover it.
00:34:38.676 [2024-07-14 19:06:26.647677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.676 [2024-07-14 19:06:26.647804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.676 [2024-07-14 19:06:26.647829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.676 [2024-07-14 19:06:26.647843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.676 [2024-07-14 19:06:26.647855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.676 [2024-07-14 19:06:26.647888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.676 qpair failed and we were unable to recover it.
00:34:38.676 [2024-07-14 19:06:26.657709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.676 [2024-07-14 19:06:26.657822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.676 [2024-07-14 19:06:26.657847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.676 [2024-07-14 19:06:26.657861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.676 [2024-07-14 19:06:26.657874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.676 [2024-07-14 19:06:26.657912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.676 qpair failed and we were unable to recover it.
00:34:38.676 [2024-07-14 19:06:26.667706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.676 [2024-07-14 19:06:26.667801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.676 [2024-07-14 19:06:26.667827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.676 [2024-07-14 19:06:26.667841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.676 [2024-07-14 19:06:26.667853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.676 [2024-07-14 19:06:26.667889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.676 qpair failed and we were unable to recover it.
00:34:38.676 [2024-07-14 19:06:26.677731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.676 [2024-07-14 19:06:26.677827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.676 [2024-07-14 19:06:26.677852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.676 [2024-07-14 19:06:26.677866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.676 [2024-07-14 19:06:26.677884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.676 [2024-07-14 19:06:26.677916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.676 qpair failed and we were unable to recover it.
00:34:38.676 [2024-07-14 19:06:26.687763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.676 [2024-07-14 19:06:26.687897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.676 [2024-07-14 19:06:26.687928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.676 [2024-07-14 19:06:26.687943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.676 [2024-07-14 19:06:26.687955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.676 [2024-07-14 19:06:26.687983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.676 qpair failed and we were unable to recover it.
00:34:38.676 [2024-07-14 19:06:26.697808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.676 [2024-07-14 19:06:26.697914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.676 [2024-07-14 19:06:26.697940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.676 [2024-07-14 19:06:26.697953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.676 [2024-07-14 19:06:26.697965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.676 [2024-07-14 19:06:26.697993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.676 qpair failed and we were unable to recover it.
00:34:38.676 [2024-07-14 19:06:26.707855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.707976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.708002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.708016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.708028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.708056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.717901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.718025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.718051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.718065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.718078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.718109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.727957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.728068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.728093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.728107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.728120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.728153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.737948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.738057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.738082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.738097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.738109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.738137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.747966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.748060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.748085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.748098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.748111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.748138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.757989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.758115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.758140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.758155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.758167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.758195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.768035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.768137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.768161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.768175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.768187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.768215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.778045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.778145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.778176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.778190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.778203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.778231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.788086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.788185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.788210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.788224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.788236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.788267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.798105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.798207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.798233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.798248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.798260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.798287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.808159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.808273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.808299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.808313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.808325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.808352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.818178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.818279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.818307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.818322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.818335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.818369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.828206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.828315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.828345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.828360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.828374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.828403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.838213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.838307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.838334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.838348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.677 [2024-07-14 19:06:26.838360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.677 [2024-07-14 19:06:26.838388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.677 qpair failed and we were unable to recover it.
00:34:38.677 [2024-07-14 19:06:26.848268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.677 [2024-07-14 19:06:26.848365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.677 [2024-07-14 19:06:26.848390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.677 [2024-07-14 19:06:26.848404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.678 [2024-07-14 19:06:26.848417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.678 [2024-07-14 19:06:26.848444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.678 qpair failed and we were unable to recover it.
00:34:38.678 [2024-07-14 19:06:26.858292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.678 [2024-07-14 19:06:26.858391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.678 [2024-07-14 19:06:26.858416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.678 [2024-07-14 19:06:26.858430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.678 [2024-07-14 19:06:26.858443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.678 [2024-07-14 19:06:26.858473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.678 qpair failed and we were unable to recover it.
00:34:38.678 [2024-07-14 19:06:26.868403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.678 [2024-07-14 19:06:26.868532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.678 [2024-07-14 19:06:26.868562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.678 [2024-07-14 19:06:26.868577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.678 [2024-07-14 19:06:26.868589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.678 [2024-07-14 19:06:26.868619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.678 qpair failed and we were unable to recover it.
00:34:38.678 [2024-07-14 19:06:26.878339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.678 [2024-07-14 19:06:26.878442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.678 [2024-07-14 19:06:26.878467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.678 [2024-07-14 19:06:26.878481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.678 [2024-07-14 19:06:26.878494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.678 [2024-07-14 19:06:26.878521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.678 qpair failed and we were unable to recover it.
00:34:38.678 [2024-07-14 19:06:26.888425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.678 [2024-07-14 19:06:26.888525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.678 [2024-07-14 19:06:26.888551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.678 [2024-07-14 19:06:26.888574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.678 [2024-07-14 19:06:26.888594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.678 [2024-07-14 19:06:26.888625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.678 qpair failed and we were unable to recover it.
00:34:38.678 [2024-07-14 19:06:26.898488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.678 [2024-07-14 19:06:26.898592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.678 [2024-07-14 19:06:26.898617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.678 [2024-07-14 19:06:26.898631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.678 [2024-07-14 19:06:26.898650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.678 [2024-07-14 19:06:26.898678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.678 qpair failed and we were unable to recover it.
00:34:38.939 [2024-07-14 19:06:26.908412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.939 [2024-07-14 19:06:26.908508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.939 [2024-07-14 19:06:26.908533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.939 [2024-07-14 19:06:26.908546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.939 [2024-07-14 19:06:26.908565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.939 [2024-07-14 19:06:26.908593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.939 qpair failed and we were unable to recover it.
00:34:38.939 [2024-07-14 19:06:26.918461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.939 [2024-07-14 19:06:26.918593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.939 [2024-07-14 19:06:26.918619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.939 [2024-07-14 19:06:26.918633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.939 [2024-07-14 19:06:26.918646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.940 [2024-07-14 19:06:26.918676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.940 qpair failed and we were unable to recover it.
00:34:38.940 [2024-07-14 19:06:26.928505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.940 [2024-07-14 19:06:26.928624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.940 [2024-07-14 19:06:26.928648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.940 [2024-07-14 19:06:26.928662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.940 [2024-07-14 19:06:26.928674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.940 [2024-07-14 19:06:26.928701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.940 qpair failed and we were unable to recover it.
00:34:38.940 [2024-07-14 19:06:26.938492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.940 [2024-07-14 19:06:26.938596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.940 [2024-07-14 19:06:26.938629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.940 [2024-07-14 19:06:26.938643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.940 [2024-07-14 19:06:26.938655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.940 [2024-07-14 19:06:26.938683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.940 qpair failed and we were unable to recover it.
00:34:38.940 [2024-07-14 19:06:26.948663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.940 [2024-07-14 19:06:26.948796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.940 [2024-07-14 19:06:26.948821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.940 [2024-07-14 19:06:26.948836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.940 [2024-07-14 19:06:26.948848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.940 [2024-07-14 19:06:26.948883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.940 qpair failed and we were unable to recover it.
00:34:38.940 [2024-07-14 19:06:26.958557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:38.940 [2024-07-14 19:06:26.958686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:38.940 [2024-07-14 19:06:26.958711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:38.940 [2024-07-14 19:06:26.958726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:38.940 [2024-07-14 19:06:26.958738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:38.940 [2024-07-14 19:06:26.958765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:38.940 qpair failed and we were unable to recover it.
00:34:38.940 [2024-07-14 19:06:26.968607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:26.968715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:26.968741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:26.968754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:26.968766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:26.968794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:26.978625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:26.978725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:26.978751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:26.978765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:26.978777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:26.978804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:26.988646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:26.988748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:26.988773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:26.988787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:26.988800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:26.988829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:26.998681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:26.998775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:26.998801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:26.998814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:26.998832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:26.998860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:27.008734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:27.008832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:27.008858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:27.008872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:27.008902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:27.008933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:27.018740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:27.018839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:27.018865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:27.018887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:27.018903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:27.018931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:27.028737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:27.028883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:27.028909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:27.028923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:27.028935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:27.028962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:27.038781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:27.038922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:27.038949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:27.038963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:27.038976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:27.039004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:27.048810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:27.048923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:27.048948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:27.048962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:27.048975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:27.049002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:27.058861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.940 [2024-07-14 19:06:27.058989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.940 [2024-07-14 19:06:27.059014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.940 [2024-07-14 19:06:27.059028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.940 [2024-07-14 19:06:27.059041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.940 [2024-07-14 19:06:27.059071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.940 qpair failed and we were unable to recover it. 
00:34:38.940 [2024-07-14 19:06:27.068872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.068995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.069020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.069034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.069046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.069074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:38.941 [2024-07-14 19:06:27.078931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.079024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.079049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.079063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.079075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.079102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:38.941 [2024-07-14 19:06:27.088934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.089059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.089086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.089105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.089118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.089147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:38.941 [2024-07-14 19:06:27.098970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.099096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.099121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.099136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.099148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.099178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:38.941 [2024-07-14 19:06:27.109001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.109095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.109121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.109135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.109147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.109176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:38.941 [2024-07-14 19:06:27.119046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.119140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.119166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.119180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.119193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.119220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:38.941 [2024-07-14 19:06:27.129134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.129241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.129266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.129280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.129292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.129319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:38.941 [2024-07-14 19:06:27.139127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.139228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.139254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.139268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.139280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.139307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:38.941 [2024-07-14 19:06:27.149104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.149238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.149263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.149277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.149289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.149318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:38.941 [2024-07-14 19:06:27.159172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.941 [2024-07-14 19:06:27.159295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.941 [2024-07-14 19:06:27.159320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.941 [2024-07-14 19:06:27.159334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.941 [2024-07-14 19:06:27.159346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:38.941 [2024-07-14 19:06:27.159374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:38.941 qpair failed and we were unable to recover it. 
00:34:39.203 [2024-07-14 19:06:27.169205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.203 [2024-07-14 19:06:27.169306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.203 [2024-07-14 19:06:27.169333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.203 [2024-07-14 19:06:27.169348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.203 [2024-07-14 19:06:27.169363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.203 [2024-07-14 19:06:27.169393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.203 qpair failed and we were unable to recover it. 
00:34:39.203 [2024-07-14 19:06:27.179209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.203 [2024-07-14 19:06:27.179310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.203 [2024-07-14 19:06:27.179336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.203 [2024-07-14 19:06:27.179358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.203 [2024-07-14 19:06:27.179371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.203 [2024-07-14 19:06:27.179399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.203 qpair failed and we were unable to recover it. 
00:34:39.203 [2024-07-14 19:06:27.189209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.203 [2024-07-14 19:06:27.189304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.203 [2024-07-14 19:06:27.189330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.203 [2024-07-14 19:06:27.189344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.203 [2024-07-14 19:06:27.189356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.204 [2024-07-14 19:06:27.189385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.204 qpair failed and we were unable to recover it. 
00:34:39.204 [2024-07-14 19:06:27.199258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.204 [2024-07-14 19:06:27.199357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.204 [2024-07-14 19:06:27.199382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.204 [2024-07-14 19:06:27.199396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.204 [2024-07-14 19:06:27.199408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.204 [2024-07-14 19:06:27.199436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.204 qpair failed and we were unable to recover it. 
00:34:39.204 [2024-07-14 19:06:27.209310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.204 [2024-07-14 19:06:27.209424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.204 [2024-07-14 19:06:27.209449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.204 [2024-07-14 19:06:27.209463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.204 [2024-07-14 19:06:27.209475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.204 [2024-07-14 19:06:27.209502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.204 qpair failed and we were unable to recover it. 
00:34:39.204 [2024-07-14 19:06:27.219313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.204 [2024-07-14 19:06:27.219413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.204 [2024-07-14 19:06:27.219438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.204 [2024-07-14 19:06:27.219452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.204 [2024-07-14 19:06:27.219467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.204 [2024-07-14 19:06:27.219495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.204 qpair failed and we were unable to recover it. 
00:34:39.204 [2024-07-14 19:06:27.229361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.204 [2024-07-14 19:06:27.229463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.204 [2024-07-14 19:06:27.229488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.204 [2024-07-14 19:06:27.229502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.204 [2024-07-14 19:06:27.229514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.204 [2024-07-14 19:06:27.229542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.204 qpair failed and we were unable to recover it. 
00:34:39.204 [2024-07-14 19:06:27.239445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.204 [2024-07-14 19:06:27.239535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.204 [2024-07-14 19:06:27.239566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.204 [2024-07-14 19:06:27.239580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.204 [2024-07-14 19:06:27.239592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.204 [2024-07-14 19:06:27.239620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.204 qpair failed and we were unable to recover it. 
00:34:39.204 [2024-07-14 19:06:27.249386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.204 [2024-07-14 19:06:27.249490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.204 [2024-07-14 19:06:27.249514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.204 [2024-07-14 19:06:27.249528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.204 [2024-07-14 19:06:27.249541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.204 [2024-07-14 19:06:27.249568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.204 qpair failed and we were unable to recover it.
00:34:39.204 [2024-07-14 19:06:27.259431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.204 [2024-07-14 19:06:27.259527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.204 [2024-07-14 19:06:27.259551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.204 [2024-07-14 19:06:27.259565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.204 [2024-07-14 19:06:27.259577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.204 [2024-07-14 19:06:27.259605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.204 qpair failed and we were unable to recover it.
00:34:39.204 [2024-07-14 19:06:27.269546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.204 [2024-07-14 19:06:27.269640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.204 [2024-07-14 19:06:27.269665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.204 [2024-07-14 19:06:27.269687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.204 [2024-07-14 19:06:27.269714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.204 [2024-07-14 19:06:27.269743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.204 qpair failed and we were unable to recover it.
00:34:39.204 [2024-07-14 19:06:27.279520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.204 [2024-07-14 19:06:27.279618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.204 [2024-07-14 19:06:27.279643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.204 [2024-07-14 19:06:27.279657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.204 [2024-07-14 19:06:27.279672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.204 [2024-07-14 19:06:27.279701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.204 qpair failed and we were unable to recover it.
00:34:39.204 [2024-07-14 19:06:27.289569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.204 [2024-07-14 19:06:27.289664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.204 [2024-07-14 19:06:27.289690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.204 [2024-07-14 19:06:27.289704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.204 [2024-07-14 19:06:27.289717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.204 [2024-07-14 19:06:27.289744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.204 qpair failed and we were unable to recover it.
00:34:39.204 [2024-07-14 19:06:27.299558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.204 [2024-07-14 19:06:27.299659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.204 [2024-07-14 19:06:27.299683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.204 [2024-07-14 19:06:27.299697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.204 [2024-07-14 19:06:27.299709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.204 [2024-07-14 19:06:27.299736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.204 qpair failed and we were unable to recover it.
00:34:39.204 [2024-07-14 19:06:27.309579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.204 [2024-07-14 19:06:27.309677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.204 [2024-07-14 19:06:27.309702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.204 [2024-07-14 19:06:27.309716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.204 [2024-07-14 19:06:27.309728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.204 [2024-07-14 19:06:27.309755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.204 qpair failed and we were unable to recover it.
00:34:39.204 [2024-07-14 19:06:27.319635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.204 [2024-07-14 19:06:27.319764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.204 [2024-07-14 19:06:27.319790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.204 [2024-07-14 19:06:27.319804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.204 [2024-07-14 19:06:27.319816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.204 [2024-07-14 19:06:27.319844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.204 qpair failed and we were unable to recover it.
00:34:39.204 [2024-07-14 19:06:27.329634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.204 [2024-07-14 19:06:27.329734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.204 [2024-07-14 19:06:27.329759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.204 [2024-07-14 19:06:27.329772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.204 [2024-07-14 19:06:27.329785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.204 [2024-07-14 19:06:27.329812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.205 [2024-07-14 19:06:27.339673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.205 [2024-07-14 19:06:27.339786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.205 [2024-07-14 19:06:27.339813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.205 [2024-07-14 19:06:27.339827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.205 [2024-07-14 19:06:27.339845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.205 [2024-07-14 19:06:27.339874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.205 [2024-07-14 19:06:27.349696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.205 [2024-07-14 19:06:27.349795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.205 [2024-07-14 19:06:27.349821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.205 [2024-07-14 19:06:27.349835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.205 [2024-07-14 19:06:27.349848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.205 [2024-07-14 19:06:27.349882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.205 [2024-07-14 19:06:27.359714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.205 [2024-07-14 19:06:27.359853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.205 [2024-07-14 19:06:27.359888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.205 [2024-07-14 19:06:27.359905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.205 [2024-07-14 19:06:27.359917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.205 [2024-07-14 19:06:27.359957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.205 [2024-07-14 19:06:27.369745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.205 [2024-07-14 19:06:27.369862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.205 [2024-07-14 19:06:27.369894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.205 [2024-07-14 19:06:27.369909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.205 [2024-07-14 19:06:27.369921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.205 [2024-07-14 19:06:27.369948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.205 [2024-07-14 19:06:27.379778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.205 [2024-07-14 19:06:27.379883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.205 [2024-07-14 19:06:27.379908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.205 [2024-07-14 19:06:27.379923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.205 [2024-07-14 19:06:27.379935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.205 [2024-07-14 19:06:27.379962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.205 [2024-07-14 19:06:27.389796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.205 [2024-07-14 19:06:27.389905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.205 [2024-07-14 19:06:27.389931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.205 [2024-07-14 19:06:27.389945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.205 [2024-07-14 19:06:27.389957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.205 [2024-07-14 19:06:27.389985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.205 [2024-07-14 19:06:27.399924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.205 [2024-07-14 19:06:27.400024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.205 [2024-07-14 19:06:27.400048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.205 [2024-07-14 19:06:27.400062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.205 [2024-07-14 19:06:27.400074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.205 [2024-07-14 19:06:27.400102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.205 [2024-07-14 19:06:27.409866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.205 [2024-07-14 19:06:27.409978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.205 [2024-07-14 19:06:27.410003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.205 [2024-07-14 19:06:27.410017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.205 [2024-07-14 19:06:27.410030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.205 [2024-07-14 19:06:27.410057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.205 [2024-07-14 19:06:27.419903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.205 [2024-07-14 19:06:27.420012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.205 [2024-07-14 19:06:27.420037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.205 [2024-07-14 19:06:27.420051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.205 [2024-07-14 19:06:27.420064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.205 [2024-07-14 19:06:27.420091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.205 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.429998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.430097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.430123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.430137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.430150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.430179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.439940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.440042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.440070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.440086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.440099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.440128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.449981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.450101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.450132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.450148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.450160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.450188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.459988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.460085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.460109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.460123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.460136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.460163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.470045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.470164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.470190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.470205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.470217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.470247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.480050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.480154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.480178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.480193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.480205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.480243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.490100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.490214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.490239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.490253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.490265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.490298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.500148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.500254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.500280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.500294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.500306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.500333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.510169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.510309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.510334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.510348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.510360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.510403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.520209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.520304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.520330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.520344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.520357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.520387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.530212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.530312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.530337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.530350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.530363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.530390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.540234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.540335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.540365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.540380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.540393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.540422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.467 qpair failed and we were unable to recover it.
00:34:39.467 [2024-07-14 19:06:27.550300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.467 [2024-07-14 19:06:27.550400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.467 [2024-07-14 19:06:27.550425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.467 [2024-07-14 19:06:27.550439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.467 [2024-07-14 19:06:27.550451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.467 [2024-07-14 19:06:27.550478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.468 qpair failed and we were unable to recover it.
00:34:39.468 [2024-07-14 19:06:27.560293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.468 [2024-07-14 19:06:27.560389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.468 [2024-07-14 19:06:27.560414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.468 [2024-07-14 19:06:27.560428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.468 [2024-07-14 19:06:27.560440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.468 [2024-07-14 19:06:27.560472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.468 qpair failed and we were unable to recover it.
00:34:39.468 [2024-07-14 19:06:27.570386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.468 [2024-07-14 19:06:27.570487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.468 [2024-07-14 19:06:27.570512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.468 [2024-07-14 19:06:27.570526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.468 [2024-07-14 19:06:27.570538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.468 [2024-07-14 19:06:27.570565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.468 qpair failed and we were unable to recover it.
00:34:39.468 [2024-07-14 19:06:27.580384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.468 [2024-07-14 19:06:27.580490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.468 [2024-07-14 19:06:27.580515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.468 [2024-07-14 19:06:27.580529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.468 [2024-07-14 19:06:27.580541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.468 [2024-07-14 19:06:27.580575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.468 qpair failed and we were unable to recover it.
00:34:39.468 [2024-07-14 19:06:27.590373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.468 [2024-07-14 19:06:27.590471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.468 [2024-07-14 19:06:27.590497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.468 [2024-07-14 19:06:27.590512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.468 [2024-07-14 19:06:27.590524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.468 [2024-07-14 19:06:27.590553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.468 qpair failed and we were unable to recover it.
00:34:39.468 [2024-07-14 19:06:27.600421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:39.468 [2024-07-14 19:06:27.600513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:39.468 [2024-07-14 19:06:27.600538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:39.468 [2024-07-14 19:06:27.600552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:39.468 [2024-07-14 19:06:27.600564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:39.468 [2024-07-14 19:06:27.600592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:39.468 qpair failed and we were unable to recover it.
00:34:39.468 [2024-07-14 19:06:27.610430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.468 [2024-07-14 19:06:27.610553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.468 [2024-07-14 19:06:27.610578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.468 [2024-07-14 19:06:27.610592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.468 [2024-07-14 19:06:27.610603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.468 [2024-07-14 19:06:27.610631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.468 qpair failed and we were unable to recover it. 
00:34:39.468 [2024-07-14 19:06:27.620472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.468 [2024-07-14 19:06:27.620585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.468 [2024-07-14 19:06:27.620611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.468 [2024-07-14 19:06:27.620624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.468 [2024-07-14 19:06:27.620636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.468 [2024-07-14 19:06:27.620664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.468 qpair failed and we were unable to recover it. 
00:34:39.468 [2024-07-14 19:06:27.630580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.468 [2024-07-14 19:06:27.630675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.468 [2024-07-14 19:06:27.630721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.468 [2024-07-14 19:06:27.630735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.468 [2024-07-14 19:06:27.630747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.468 [2024-07-14 19:06:27.630788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.468 qpair failed and we were unable to recover it. 
00:34:39.468 [2024-07-14 19:06:27.640512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.468 [2024-07-14 19:06:27.640601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.468 [2024-07-14 19:06:27.640626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.468 [2024-07-14 19:06:27.640639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.468 [2024-07-14 19:06:27.640651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.468 [2024-07-14 19:06:27.640679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.468 qpair failed and we were unable to recover it. 
00:34:39.468 [2024-07-14 19:06:27.650555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.468 [2024-07-14 19:06:27.650662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.468 [2024-07-14 19:06:27.650687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.468 [2024-07-14 19:06:27.650701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.468 [2024-07-14 19:06:27.650713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.468 [2024-07-14 19:06:27.650740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.468 qpair failed and we were unable to recover it. 
00:34:39.468 [2024-07-14 19:06:27.660632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.468 [2024-07-14 19:06:27.660768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.468 [2024-07-14 19:06:27.660794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.468 [2024-07-14 19:06:27.660808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.468 [2024-07-14 19:06:27.660835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.468 [2024-07-14 19:06:27.660864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.468 qpair failed and we were unable to recover it. 
00:34:39.468 [2024-07-14 19:06:27.670698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.468 [2024-07-14 19:06:27.670807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.468 [2024-07-14 19:06:27.670849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.468 [2024-07-14 19:06:27.670867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.468 [2024-07-14 19:06:27.670892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.468 [2024-07-14 19:06:27.670940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.468 qpair failed and we were unable to recover it. 
00:34:39.468 [2024-07-14 19:06:27.680640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.468 [2024-07-14 19:06:27.680740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.468 [2024-07-14 19:06:27.680765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.468 [2024-07-14 19:06:27.680780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.468 [2024-07-14 19:06:27.680792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.468 [2024-07-14 19:06:27.680820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.468 qpair failed and we were unable to recover it. 
00:34:39.468 [2024-07-14 19:06:27.690708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.468 [2024-07-14 19:06:27.690826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.468 [2024-07-14 19:06:27.690851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.468 [2024-07-14 19:06:27.690865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.468 [2024-07-14 19:06:27.690884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.468 [2024-07-14 19:06:27.690914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.468 qpair failed and we were unable to recover it. 
00:34:39.729 [2024-07-14 19:06:27.700713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.729 [2024-07-14 19:06:27.700821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.729 [2024-07-14 19:06:27.700847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.729 [2024-07-14 19:06:27.700862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.729 [2024-07-14 19:06:27.700874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.729 [2024-07-14 19:06:27.700911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.729 qpair failed and we were unable to recover it. 
00:34:39.729 [2024-07-14 19:06:27.710746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.729 [2024-07-14 19:06:27.710896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.729 [2024-07-14 19:06:27.710921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.729 [2024-07-14 19:06:27.710935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.729 [2024-07-14 19:06:27.710947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.710975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.720743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.720845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.720870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.720891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.720904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.720935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.730784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.730894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.730920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.730935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.730947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.730974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.740831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.740946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.740971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.740985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.740997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.741025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.750809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.750922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.750947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.750961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.750974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.751001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.760922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.761041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.761066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.761080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.761098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.761127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.770903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.771009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.771036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.771050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.771063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.771091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.780941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.781041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.781067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.781081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.781093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.781121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.790938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.791034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.791059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.791073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.791085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.791114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.801015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.801115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.801141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.801155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.801167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.801195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.811019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.811125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.811150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.811163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.811176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.811203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.821031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.821153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.821178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.821191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.821204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.821231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.831073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.831206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.831230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.831244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.831256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.831283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.841097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.841202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.841229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.841243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.841256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.841286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.730 [2024-07-14 19:06:27.851134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.730 [2024-07-14 19:06:27.851233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.730 [2024-07-14 19:06:27.851258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.730 [2024-07-14 19:06:27.851272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.730 [2024-07-14 19:06:27.851290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.730 [2024-07-14 19:06:27.851322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.730 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.861171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.861287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.861313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.861327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.861339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.861367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.871191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.871289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.871314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.871329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.871344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.871372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.881229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.881348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.881373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.881388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.881400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.881427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.891236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.891339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.891365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.891379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.891391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.891419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.901299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.901427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.901452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.901466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.901478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.901505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.911302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.911421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.911447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.911462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.911475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.911504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.921351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.921452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.921477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.921491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.921503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.921534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.931360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.931476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.931502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.931516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.931528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.931556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.941414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.941521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.941547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.941567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.941580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.941611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.731 [2024-07-14 19:06:27.951564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.731 [2024-07-14 19:06:27.951690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.731 [2024-07-14 19:06:27.951715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.731 [2024-07-14 19:06:27.951728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.731 [2024-07-14 19:06:27.951741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.731 [2024-07-14 19:06:27.951771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.731 qpair failed and we were unable to recover it. 
00:34:39.991 [2024-07-14 19:06:27.961519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.991 [2024-07-14 19:06:27.961628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.991 [2024-07-14 19:06:27.961653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.991 [2024-07-14 19:06:27.961667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.991 [2024-07-14 19:06:27.961680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.991 [2024-07-14 19:06:27.961707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.991 qpair failed and we were unable to recover it. 
00:34:39.991 [2024-07-14 19:06:27.971533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.991 [2024-07-14 19:06:27.971637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.991 [2024-07-14 19:06:27.971663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.991 [2024-07-14 19:06:27.971678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.991 [2024-07-14 19:06:27.971690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.991 [2024-07-14 19:06:27.971717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.991 qpair failed and we were unable to recover it. 
00:34:39.991 [2024-07-14 19:06:27.981581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.991 [2024-07-14 19:06:27.981726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.991 [2024-07-14 19:06:27.981751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.991 [2024-07-14 19:06:27.981764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.991 [2024-07-14 19:06:27.981776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.991 [2024-07-14 19:06:27.981804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.991 qpair failed and we were unable to recover it. 
00:34:39.991 [2024-07-14 19:06:27.991547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.991 [2024-07-14 19:06:27.991645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.991 [2024-07-14 19:06:27.991670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.991 [2024-07-14 19:06:27.991684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.991 [2024-07-14 19:06:27.991697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.991 [2024-07-14 19:06:27.991727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.991 qpair failed and we were unable to recover it. 
00:34:39.991 [2024-07-14 19:06:28.001616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.991 [2024-07-14 19:06:28.001710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.991 [2024-07-14 19:06:28.001735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.991 [2024-07-14 19:06:28.001749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.991 [2024-07-14 19:06:28.001761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.991 [2024-07-14 19:06:28.001792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.991 qpair failed and we were unable to recover it. 
00:34:39.991 [2024-07-14 19:06:28.011599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.991 [2024-07-14 19:06:28.011745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.991 [2024-07-14 19:06:28.011770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.991 [2024-07-14 19:06:28.011784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.991 [2024-07-14 19:06:28.011796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.991 [2024-07-14 19:06:28.011838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.991 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.021704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.021815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.021840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.021854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.021866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.021902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.031643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.031772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.031798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.031817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.031832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.031861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.041676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.041790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.041815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.041829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.041841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.041868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.051707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.051847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.051872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.051894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.051907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.051936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.061755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.061859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.061892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.061907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.061919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.061947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.071758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.071885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.071912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.071926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.071938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.071966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.081781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.081874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.081906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.081920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.081932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.081960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.091797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.091903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.091929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.091944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.091956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.091983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.101857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.101966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.101991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.102005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.102017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.102045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.111842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.111949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.111975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.111989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.112001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.112029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.121916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.122019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.122050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.122065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.122077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.122105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.131927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.132032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.132058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.132073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.132086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.132115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.142001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.142119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.142144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.142158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.142170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.142198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.152016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.992 [2024-07-14 19:06:28.152117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.992 [2024-07-14 19:06:28.152142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.992 [2024-07-14 19:06:28.152156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.992 [2024-07-14 19:06:28.152169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.992 [2024-07-14 19:06:28.152198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.992 qpair failed and we were unable to recover it. 
00:34:39.992 [2024-07-14 19:06:28.162018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.993 [2024-07-14 19:06:28.162107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.993 [2024-07-14 19:06:28.162132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.993 [2024-07-14 19:06:28.162146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.993 [2024-07-14 19:06:28.162158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.993 [2024-07-14 19:06:28.162185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.993 qpair failed and we were unable to recover it. 
00:34:39.993 [2024-07-14 19:06:28.172103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.993 [2024-07-14 19:06:28.172200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.993 [2024-07-14 19:06:28.172226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.993 [2024-07-14 19:06:28.172240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.993 [2024-07-14 19:06:28.172252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.993 [2024-07-14 19:06:28.172280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.993 qpair failed and we were unable to recover it. 
00:34:39.993 [2024-07-14 19:06:28.182071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.993 [2024-07-14 19:06:28.182196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.993 [2024-07-14 19:06:28.182221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.993 [2024-07-14 19:06:28.182234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.993 [2024-07-14 19:06:28.182247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.993 [2024-07-14 19:06:28.182274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.993 qpair failed and we were unable to recover it. 
00:34:39.993 [2024-07-14 19:06:28.192117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.993 [2024-07-14 19:06:28.192219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.993 [2024-07-14 19:06:28.192246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.993 [2024-07-14 19:06:28.192260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.993 [2024-07-14 19:06:28.192272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.993 [2024-07-14 19:06:28.192302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.993 qpair failed and we were unable to recover it. 
00:34:39.993 [2024-07-14 19:06:28.202143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.993 [2024-07-14 19:06:28.202234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.993 [2024-07-14 19:06:28.202259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.993 [2024-07-14 19:06:28.202273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.993 [2024-07-14 19:06:28.202285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.993 [2024-07-14 19:06:28.202312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.993 qpair failed and we were unable to recover it. 
00:34:39.993 [2024-07-14 19:06:28.212176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.993 [2024-07-14 19:06:28.212305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.993 [2024-07-14 19:06:28.212335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.993 [2024-07-14 19:06:28.212350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.993 [2024-07-14 19:06:28.212362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:39.993 [2024-07-14 19:06:28.212389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:39.993 qpair failed and we were unable to recover it. 
00:34:40.252 [2024-07-14 19:06:28.222192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.222300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.222325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.222339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.222351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.252 [2024-07-14 19:06:28.222379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.252 qpair failed and we were unable to recover it.
00:34:40.252 [2024-07-14 19:06:28.232201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.232300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.232325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.232339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.232352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.252 [2024-07-14 19:06:28.232380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.252 qpair failed and we were unable to recover it.
00:34:40.252 [2024-07-14 19:06:28.242257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.242354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.242379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.242393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.242406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.252 [2024-07-14 19:06:28.242433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.252 qpair failed and we were unable to recover it.
00:34:40.252 [2024-07-14 19:06:28.252268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.252367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.252392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.252407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.252419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.252 [2024-07-14 19:06:28.252453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.252 qpair failed and we were unable to recover it.
00:34:40.252 [2024-07-14 19:06:28.262348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.262449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.262475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.262489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.262501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.252 [2024-07-14 19:06:28.262529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.252 qpair failed and we were unable to recover it.
00:34:40.252 [2024-07-14 19:06:28.272321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.272418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.272444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.272458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.272470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.252 [2024-07-14 19:06:28.272500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.252 qpair failed and we were unable to recover it.
00:34:40.252 [2024-07-14 19:06:28.282362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.282459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.282484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.282498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.282510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.252 [2024-07-14 19:06:28.282538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.252 qpair failed and we were unable to recover it.
00:34:40.252 [2024-07-14 19:06:28.292403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.292502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.292528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.292542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.292554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.252 [2024-07-14 19:06:28.292582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.252 qpair failed and we were unable to recover it.
00:34:40.252 [2024-07-14 19:06:28.302436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.302574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.302605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.302620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.302632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.252 [2024-07-14 19:06:28.302677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.252 qpair failed and we were unable to recover it.
00:34:40.252 [2024-07-14 19:06:28.312468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.252 [2024-07-14 19:06:28.312565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.252 [2024-07-14 19:06:28.312590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.252 [2024-07-14 19:06:28.312604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.252 [2024-07-14 19:06:28.312616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.312644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.322473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.322569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.322594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.322608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.322621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.322650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.332507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.332616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.332641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.332654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.332666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.332694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.342522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.342620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.342645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.342658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.342670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.342703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.352545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.352641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.352666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.352680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.352692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.352722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.362596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.362712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.362737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.362750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.362762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.362789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.372639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.372741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.372767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.372781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.372793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.372821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.382670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.382765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.382791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.382805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.382817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.382844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.392668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.392767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.392797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.392813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.392825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.392854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.402768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.402867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.402898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.402913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.402926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.402953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.253 [2024-07-14 19:06:28.412740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.253 [2024-07-14 19:06:28.412840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.253 [2024-07-14 19:06:28.412865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.253 [2024-07-14 19:06:28.412887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.253 [2024-07-14 19:06:28.412902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.253 [2024-07-14 19:06:28.412930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.253 qpair failed and we were unable to recover it.
00:34:40.254 [2024-07-14 19:06:28.422776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.254 [2024-07-14 19:06:28.422892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.254 [2024-07-14 19:06:28.422919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.254 [2024-07-14 19:06:28.422933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.254 [2024-07-14 19:06:28.422945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.254 [2024-07-14 19:06:28.422973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.254 qpair failed and we were unable to recover it.
00:34:40.254 [2024-07-14 19:06:28.432809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.254 [2024-07-14 19:06:28.432911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.254 [2024-07-14 19:06:28.432936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.254 [2024-07-14 19:06:28.432950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.254 [2024-07-14 19:06:28.432968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.254 [2024-07-14 19:06:28.432997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.254 qpair failed and we were unable to recover it.
00:34:40.254 [2024-07-14 19:06:28.442873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.254 [2024-07-14 19:06:28.442971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.254 [2024-07-14 19:06:28.442997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.254 [2024-07-14 19:06:28.443011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.254 [2024-07-14 19:06:28.443023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.254 [2024-07-14 19:06:28.443054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.254 qpair failed and we were unable to recover it.
00:34:40.254 [2024-07-14 19:06:28.452866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.254 [2024-07-14 19:06:28.452978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.254 [2024-07-14 19:06:28.453003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.254 [2024-07-14 19:06:28.453017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.254 [2024-07-14 19:06:28.453029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.254 [2024-07-14 19:06:28.453057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.254 qpair failed and we were unable to recover it.
00:34:40.254 [2024-07-14 19:06:28.462897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.254 [2024-07-14 19:06:28.462998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.254 [2024-07-14 19:06:28.463024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.254 [2024-07-14 19:06:28.463038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.254 [2024-07-14 19:06:28.463050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.254 [2024-07-14 19:06:28.463077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.254 qpair failed and we were unable to recover it.
00:34:40.254 [2024-07-14 19:06:28.472950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.254 [2024-07-14 19:06:28.473055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.254 [2024-07-14 19:06:28.473081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.254 [2024-07-14 19:06:28.473095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.254 [2024-07-14 19:06:28.473107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.254 [2024-07-14 19:06:28.473138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.254 qpair failed and we were unable to recover it.
00:34:40.513 [2024-07-14 19:06:28.482926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.513 [2024-07-14 19:06:28.483023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.513 [2024-07-14 19:06:28.483049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.513 [2024-07-14 19:06:28.483063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.513 [2024-07-14 19:06:28.483075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.513 [2024-07-14 19:06:28.483103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.513 qpair failed and we were unable to recover it.
00:34:40.513 [2024-07-14 19:06:28.492994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.513 [2024-07-14 19:06:28.493111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.513 [2024-07-14 19:06:28.493138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.513 [2024-07-14 19:06:28.493152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.513 [2024-07-14 19:06:28.493168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.513 [2024-07-14 19:06:28.493198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.513 qpair failed and we were unable to recover it.
00:34:40.513 [2024-07-14 19:06:28.503040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.513 [2024-07-14 19:06:28.503145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.513 [2024-07-14 19:06:28.503171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.513 [2024-07-14 19:06:28.503185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.513 [2024-07-14 19:06:28.503197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.513 [2024-07-14 19:06:28.503225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.513 qpair failed and we were unable to recover it.
00:34:40.513 [2024-07-14 19:06:28.513051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.513 [2024-07-14 19:06:28.513158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.513 [2024-07-14 19:06:28.513186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.513 [2024-07-14 19:06:28.513201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.513 [2024-07-14 19:06:28.513217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.513 [2024-07-14 19:06:28.513247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.513 qpair failed and we were unable to recover it.
00:34:40.513 [2024-07-14 19:06:28.523047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.513 [2024-07-14 19:06:28.523141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.513 [2024-07-14 19:06:28.523167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.513 [2024-07-14 19:06:28.523182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.513 [2024-07-14 19:06:28.523200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.513 [2024-07-14 19:06:28.523229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.513 qpair failed and we were unable to recover it.
00:34:40.513 [2024-07-14 19:06:28.533131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.513 [2024-07-14 19:06:28.533238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.513 [2024-07-14 19:06:28.533264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.513 [2024-07-14 19:06:28.533278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.513 [2024-07-14 19:06:28.533290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.513 [2024-07-14 19:06:28.533317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.513 qpair failed and we were unable to recover it.
00:34:40.513 [2024-07-14 19:06:28.543146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:40.513 [2024-07-14 19:06:28.543251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:40.513 [2024-07-14 19:06:28.543279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:40.513 [2024-07-14 19:06:28.543294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:40.513 [2024-07-14 19:06:28.543309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:40.513 [2024-07-14 19:06:28.543338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:40.514 qpair failed and we were unable to recover it.
00:34:40.514 [2024-07-14 19:06:28.553156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.553258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.553284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.553298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.553313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.553341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.563179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.563291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.563316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.563330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.563342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.563370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.573222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.573330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.573356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.573370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.573382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.573410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.583212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.583312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.583337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.583351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.583363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.583391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.593246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.593337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.593362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.593376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.593388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.593418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.603334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.603454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.603480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.603493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.603506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.603533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.613376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.613476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.613501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.613515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.613533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.613564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.623405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.623511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.623539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.623555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.623569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.623598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.633405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.633507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.633533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.633548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.633560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.633589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.643414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.643509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.643534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.643549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.643561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.643588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.653445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.514 [2024-07-14 19:06:28.653559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.514 [2024-07-14 19:06:28.653584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.514 [2024-07-14 19:06:28.653598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.514 [2024-07-14 19:06:28.653610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.514 [2024-07-14 19:06:28.653638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.514 qpair failed and we were unable to recover it. 
00:34:40.514 [2024-07-14 19:06:28.663555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.515 [2024-07-14 19:06:28.663653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.515 [2024-07-14 19:06:28.663693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.515 [2024-07-14 19:06:28.663707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.515 [2024-07-14 19:06:28.663719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.515 [2024-07-14 19:06:28.663745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.515 qpair failed and we were unable to recover it. 
00:34:40.515 [2024-07-14 19:06:28.673493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.515 [2024-07-14 19:06:28.673588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.515 [2024-07-14 19:06:28.673614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.515 [2024-07-14 19:06:28.673627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.515 [2024-07-14 19:06:28.673639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.515 [2024-07-14 19:06:28.673667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.515 qpair failed and we were unable to recover it. 
00:34:40.515 [2024-07-14 19:06:28.683518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.515 [2024-07-14 19:06:28.683618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.515 [2024-07-14 19:06:28.683643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.515 [2024-07-14 19:06:28.683657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.515 [2024-07-14 19:06:28.683669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.515 [2024-07-14 19:06:28.683695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.515 qpair failed and we were unable to recover it. 
00:34:40.515 [2024-07-14 19:06:28.693573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.515 [2024-07-14 19:06:28.693706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.515 [2024-07-14 19:06:28.693731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.515 [2024-07-14 19:06:28.693745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.515 [2024-07-14 19:06:28.693757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.515 [2024-07-14 19:06:28.693785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.515 qpair failed and we were unable to recover it. 
00:34:40.515 [2024-07-14 19:06:28.703564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.515 [2024-07-14 19:06:28.703665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.515 [2024-07-14 19:06:28.703691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.515 [2024-07-14 19:06:28.703713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.515 [2024-07-14 19:06:28.703726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.515 [2024-07-14 19:06:28.703756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.515 qpair failed and we were unable to recover it. 
00:34:40.515 [2024-07-14 19:06:28.713691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.515 [2024-07-14 19:06:28.713785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.515 [2024-07-14 19:06:28.713810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.515 [2024-07-14 19:06:28.713824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.515 [2024-07-14 19:06:28.713839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.515 [2024-07-14 19:06:28.713866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.515 qpair failed and we were unable to recover it. 
00:34:40.515 [2024-07-14 19:06:28.723636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.515 [2024-07-14 19:06:28.723728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.515 [2024-07-14 19:06:28.723753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.515 [2024-07-14 19:06:28.723767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.515 [2024-07-14 19:06:28.723779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.515 [2024-07-14 19:06:28.723809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.515 qpair failed and we were unable to recover it. 
00:34:40.515 [2024-07-14 19:06:28.733758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.515 [2024-07-14 19:06:28.733860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.515 [2024-07-14 19:06:28.733892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.515 [2024-07-14 19:06:28.733907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.515 [2024-07-14 19:06:28.733919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.515 [2024-07-14 19:06:28.733946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.515 qpair failed and we were unable to recover it. 
00:34:40.774 [2024-07-14 19:06:28.743772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.774 [2024-07-14 19:06:28.743923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.774 [2024-07-14 19:06:28.743949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.774 [2024-07-14 19:06:28.743964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.774 [2024-07-14 19:06:28.743976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.774 [2024-07-14 19:06:28.744004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.774 qpair failed and we were unable to recover it. 
00:34:40.774 [2024-07-14 19:06:28.753777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.774 [2024-07-14 19:06:28.753898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.774 [2024-07-14 19:06:28.753924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.774 [2024-07-14 19:06:28.753938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.774 [2024-07-14 19:06:28.753950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.774 [2024-07-14 19:06:28.753978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.774 qpair failed and we were unable to recover it. 
00:34:40.774 [2024-07-14 19:06:28.763774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.774 [2024-07-14 19:06:28.763866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.774 [2024-07-14 19:06:28.763899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.774 [2024-07-14 19:06:28.763914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.774 [2024-07-14 19:06:28.763926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.774 [2024-07-14 19:06:28.763953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.774 qpair failed and we were unable to recover it. 
00:34:40.774 [2024-07-14 19:06:28.773858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.774 [2024-07-14 19:06:28.773996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.774 [2024-07-14 19:06:28.774021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.774 [2024-07-14 19:06:28.774034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.774 [2024-07-14 19:06:28.774046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.774 [2024-07-14 19:06:28.774073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.774 qpair failed and we were unable to recover it. 
00:34:40.774 [2024-07-14 19:06:28.783836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.774 [2024-07-14 19:06:28.783957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.774 [2024-07-14 19:06:28.783982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.783996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.784009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.784036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.793844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.793951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.793976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.793996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.794009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.794038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.803912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.804014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.804039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.804053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.804065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.804092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.813914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.814032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.814057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.814071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.814082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.814110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.823947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.824049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.824074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.824088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.824100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.824128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.833994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.834088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.834113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.834127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.834139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.834169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.843986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.844090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.844115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.844130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.844142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.844169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.854176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.854304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.854327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.854341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.854352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.854380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.864073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.864170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.864195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.864209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.864221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.864250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.874128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.874266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.874291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.775 [2024-07-14 19:06:28.874305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.775 [2024-07-14 19:06:28.874317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.775 [2024-07-14 19:06:28.874345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.775 qpair failed and we were unable to recover it. 
00:34:40.775 [2024-07-14 19:06:28.884103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.775 [2024-07-14 19:06:28.884198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.775 [2024-07-14 19:06:28.884223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.884241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.884255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.884283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.894165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.894307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.894332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.894346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.894358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.894386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.904178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.904281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.904307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.904321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.904333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.904360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.914188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.914285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.914311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.914324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.914337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.914364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.924236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.924325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.924350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.924364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.924376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.924404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.934270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.934389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.934414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.934429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.934441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.934469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.944292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.944393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.944418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.944432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.944444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.944474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.954334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.954433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.954459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.954473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.954485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.954515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.964334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.964429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.964455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.964469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.964481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.964508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.974398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.974503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.974533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.974547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.974560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.974587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.984408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.776 [2024-07-14 19:06:28.984511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.776 [2024-07-14 19:06:28.984536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.776 [2024-07-14 19:06:28.984552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.776 [2024-07-14 19:06:28.984565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.776 [2024-07-14 19:06:28.984593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.776 qpair failed and we were unable to recover it. 
00:34:40.776 [2024-07-14 19:06:28.994447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.777 [2024-07-14 19:06:28.994544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.777 [2024-07-14 19:06:28.994570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.777 [2024-07-14 19:06:28.994585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.777 [2024-07-14 19:06:28.994597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:40.777 [2024-07-14 19:06:28.994629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:40.777 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.004456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.004552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.004578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.004592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.004604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.004632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.014492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.014603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.014628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.014642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.014655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.014688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.024550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.024691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.024716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.024730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.024742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.024784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.034540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.034663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.034691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.034705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.034718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.034747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.044584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.044676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.044702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.044716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.044728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.044756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.054594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.054696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.054722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.054737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.054749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.054776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.064631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.064731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.064762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.064777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.064791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.064819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.074696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.074807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.074833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.074848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.074860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.074894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.084717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.084812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.084839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.084853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.084866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.084903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.094716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.094833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.094858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.094872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.094891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.094920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.104760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.104856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.104886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.104902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.104914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.104947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.114841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.114990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.115015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.115029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.115042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.115071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.124803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.124905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.124931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.124945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.124957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.124985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.036 qpair failed and we were unable to recover it. 
00:34:41.036 [2024-07-14 19:06:29.134809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.036 [2024-07-14 19:06:29.134915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.036 [2024-07-14 19:06:29.134941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.036 [2024-07-14 19:06:29.134955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.036 [2024-07-14 19:06:29.134967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.036 [2024-07-14 19:06:29.134994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.144893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.145007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.145032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.145047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.145059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.145089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.154918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.155015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.155045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.155060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.155075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.155103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.165002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.165104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.165130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.165144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.165156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.165183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.174939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.175041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.175066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.175080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.175092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.175119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.184996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.185093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.185117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.185131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.185144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.185171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.195023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.195129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.195155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.195169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.195181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.195214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.205027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.205120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.205145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.205160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.205183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.205210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.215063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.215162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.215187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.215201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.215213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.215240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.225087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.225187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.225212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.225226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.225238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.225265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.235154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.235269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.235295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.235309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.235321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.235348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.245187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.245296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.245327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.245342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.245354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.245384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.037 [2024-07-14 19:06:29.255166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.037 [2024-07-14 19:06:29.255280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.037 [2024-07-14 19:06:29.255305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.037 [2024-07-14 19:06:29.255319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.037 [2024-07-14 19:06:29.255330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.037 [2024-07-14 19:06:29.255357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.037 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.265247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.265384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.265409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.265423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.265435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.265479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.275211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.275340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.275365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.275379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.275391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.275418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.285365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.285462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.285487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.285502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.285520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.285548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.295323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.295442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.295468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.295482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.295494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.295521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.305313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.305413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.305438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.305452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.305464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.305491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.315345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.315441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.315465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.315479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.315492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.315520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.325349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.325444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.325469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.325483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.325495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.325522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.335406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.335510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.335535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.335549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.335560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.335587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.345464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.345560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.345585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.345599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.345611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.345638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.355493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.355602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.299 [2024-07-14 19:06:29.355627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.299 [2024-07-14 19:06:29.355641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.299 [2024-07-14 19:06:29.355653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.299 [2024-07-14 19:06:29.355682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.299 qpair failed and we were unable to recover it. 
00:34:41.299 [2024-07-14 19:06:29.365460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.299 [2024-07-14 19:06:29.365554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.365579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.365593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.365605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.365632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.375500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.375602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.375627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.375641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.375658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.375686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.385597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.385736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.385761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.385775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.385787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.385814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.395549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.395644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.395670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.395684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.395696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.395723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.405580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.405670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.405696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.405710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.405722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.405749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.415617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.415719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.415743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.415757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.415769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.415797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.425635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.425738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.425763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.425778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.425790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.425817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.435690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.435822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.435847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.435861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.435873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.435911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.445692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.445790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.445815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.445829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.445841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.445868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.455770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.455874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.455905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.455919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.455931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.455959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.465787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.465895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.465921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.465940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.465954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.465982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.475760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.475905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.475930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.475944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.475956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.475984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.485815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.300 [2024-07-14 19:06:29.485921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.300 [2024-07-14 19:06:29.485947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.300 [2024-07-14 19:06:29.485961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.300 [2024-07-14 19:06:29.485973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:41.300 [2024-07-14 19:06:29.486001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:41.300 qpair failed and we were unable to recover it. 
00:34:41.300 [2024-07-14 19:06:29.495832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.300 [2024-07-14 19:06:29.495941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.300 [2024-07-14 19:06:29.495966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.300 [2024-07-14 19:06:29.495980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.300 [2024-07-14 19:06:29.495993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:41.300 [2024-07-14 19:06:29.496021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:41.300 qpair failed and we were unable to recover it.
00:34:41.300 [2024-07-14 19:06:29.505925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.300 [2024-07-14 19:06:29.506033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.301 [2024-07-14 19:06:29.506058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.301 [2024-07-14 19:06:29.506072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.301 [2024-07-14 19:06:29.506084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:41.301 [2024-07-14 19:06:29.506114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:41.301 qpair failed and we were unable to recover it.
00:34:41.301 [2024-07-14 19:06:29.515980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.301 [2024-07-14 19:06:29.516081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.301 [2024-07-14 19:06:29.516115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.301 [2024-07-14 19:06:29.516132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.301 [2024-07-14 19:06:29.516145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.301 [2024-07-14 19:06:29.516176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.301 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.525942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.526050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.526077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.526092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.526107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.526137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.535997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.536105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.536131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.536145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.536157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.536186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.546099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.546204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.546231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.546245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.546257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.546289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.556101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.556200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.556226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.556248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.556261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.556292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.566050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.566144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.566170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.566184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.566196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.566228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.576097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.576198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.576225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.576239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.576251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.576281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.586169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.586270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.586296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.586311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.586323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.586354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.596146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.596242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.596271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.596285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.596298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.596328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.606178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.606275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.606301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.606315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.606327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.606358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.616181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.616282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.616307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.616321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.616333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.616362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.626224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.563 [2024-07-14 19:06:29.626323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.563 [2024-07-14 19:06:29.626348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.563 [2024-07-14 19:06:29.626362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.563 [2024-07-14 19:06:29.626375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.563 [2024-07-14 19:06:29.626405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.563 qpair failed and we were unable to recover it.
00:34:41.563 [2024-07-14 19:06:29.636377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.636504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.636530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.636544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.636556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.636586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.646303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.646405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.646436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.646451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.646464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.646505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.656311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.656455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.656481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.656495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.656508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.656552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.666370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.666523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.666548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.666562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.666574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.666616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.676340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.676437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.676462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.676476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.676489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.676518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.686429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.686548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.686573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.686587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.686599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.686635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.696434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.696535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.696561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.696575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.696587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.696616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.706461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.706598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.706622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.706636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.706648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.706677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.716459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.716556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.716581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.716595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.716610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.716639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.726553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.726699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.726725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.726739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.726751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.726808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.736508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.736611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.736642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.736657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.736668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.736697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.746587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.746738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.746763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.746777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.746789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.746820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.756587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.756693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.756721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.756736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.756750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.756793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.766585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.766674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.766700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.564 [2024-07-14 19:06:29.766714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.564 [2024-07-14 19:06:29.766726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.564 [2024-07-14 19:06:29.766756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.564 qpair failed and we were unable to recover it.
00:34:41.564 [2024-07-14 19:06:29.776627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.564 [2024-07-14 19:06:29.776727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.564 [2024-07-14 19:06:29.776752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.565 [2024-07-14 19:06:29.776765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.565 [2024-07-14 19:06:29.776776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.565 [2024-07-14 19:06:29.776811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.565 qpair failed and we were unable to recover it.
00:34:41.565 [2024-07-14 19:06:29.786691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.565 [2024-07-14 19:06:29.786805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.565 [2024-07-14 19:06:29.786831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.565 [2024-07-14 19:06:29.786845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.565 [2024-07-14 19:06:29.786857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.565 [2024-07-14 19:06:29.786894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.565 qpair failed and we were unable to recover it.
00:34:41.825 [2024-07-14 19:06:29.796692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.825 [2024-07-14 19:06:29.796792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.825 [2024-07-14 19:06:29.796818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.825 [2024-07-14 19:06:29.796832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.825 [2024-07-14 19:06:29.796844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.825 [2024-07-14 19:06:29.796883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.825 qpair failed and we were unable to recover it.
00:34:41.825 [2024-07-14 19:06:29.806777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.825 [2024-07-14 19:06:29.806891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.825 [2024-07-14 19:06:29.806917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.825 [2024-07-14 19:06:29.806931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.825 [2024-07-14 19:06:29.806943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.825 [2024-07-14 19:06:29.806972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.825 qpair failed and we were unable to recover it.
00:34:41.825 [2024-07-14 19:06:29.816731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.825 [2024-07-14 19:06:29.816829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.825 [2024-07-14 19:06:29.816854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.825 [2024-07-14 19:06:29.816868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.825 [2024-07-14 19:06:29.816887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.825 [2024-07-14 19:06:29.816928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.825 qpair failed and we were unable to recover it.
00:34:41.825 [2024-07-14 19:06:29.826806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.825 [2024-07-14 19:06:29.826965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.825 [2024-07-14 19:06:29.826991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.825 [2024-07-14 19:06:29.827005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.825 [2024-07-14 19:06:29.827017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.825 [2024-07-14 19:06:29.827045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.825 qpair failed and we were unable to recover it.
00:34:41.825 [2024-07-14 19:06:29.836812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.825 [2024-07-14 19:06:29.836916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.825 [2024-07-14 19:06:29.836942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.825 [2024-07-14 19:06:29.836957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.825 [2024-07-14 19:06:29.836969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.825 [2024-07-14 19:06:29.836998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.825 qpair failed and we were unable to recover it.
00:34:41.825 [2024-07-14 19:06:29.846857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:41.826 [2024-07-14 19:06:29.847007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:41.826 [2024-07-14 19:06:29.847035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:41.826 [2024-07-14 19:06:29.847051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:41.826 [2024-07-14 19:06:29.847066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90
00:34:41.826 [2024-07-14 19:06:29.847096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:41.826 qpair failed and we were unable to recover it.
00:34:41.826 [2024-07-14 19:06:29.856860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.857005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.857031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.857045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.857057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.857086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.866894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.866996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.867021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.867034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.867052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.867082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.876910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.877052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.877078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.877092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.877104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.877134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.886957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.887069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.887095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.887109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.887121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.887150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.896992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.897104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.897132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.897146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.897158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.897187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.907021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.907119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.907145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.907159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.907171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.907202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.917105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.917205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.917231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.917245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.917257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.917287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.927055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.927148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.927173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.927187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.927200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.927230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.937097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.937230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.937255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.937268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.937280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.937309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.947123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.947224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.947249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.947263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.947275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.947307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.957155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.957254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.957279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.957299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.957312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.957342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.967189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.967289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.967315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.967331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.967346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.967375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.977226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.977326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.977352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.826 [2024-07-14 19:06:29.977366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.826 [2024-07-14 19:06:29.977378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.826 [2024-07-14 19:06:29.977406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.826 qpair failed and we were unable to recover it. 
00:34:41.826 [2024-07-14 19:06:29.987258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.826 [2024-07-14 19:06:29.987365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.826 [2024-07-14 19:06:29.987390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.827 [2024-07-14 19:06:29.987403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.827 [2024-07-14 19:06:29.987415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.827 [2024-07-14 19:06:29.987447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.827 qpair failed and we were unable to recover it. 
00:34:41.827 [2024-07-14 19:06:29.997281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.827 [2024-07-14 19:06:29.997377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.827 [2024-07-14 19:06:29.997402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.827 [2024-07-14 19:06:29.997416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.827 [2024-07-14 19:06:29.997428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.827 [2024-07-14 19:06:29.997457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.827 qpair failed and we were unable to recover it. 
00:34:41.827 [2024-07-14 19:06:30.007328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.827 [2024-07-14 19:06:30.007460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.827 [2024-07-14 19:06:30.007486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.827 [2024-07-14 19:06:30.007500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.827 [2024-07-14 19:06:30.007512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.827 [2024-07-14 19:06:30.007543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.827 qpair failed and we were unable to recover it. 
00:34:41.827 [2024-07-14 19:06:30.017319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.827 [2024-07-14 19:06:30.017418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.827 [2024-07-14 19:06:30.017444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.827 [2024-07-14 19:06:30.017458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.827 [2024-07-14 19:06:30.017470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.827 [2024-07-14 19:06:30.017499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.827 qpair failed and we were unable to recover it. 
00:34:41.827 [2024-07-14 19:06:30.027384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.827 [2024-07-14 19:06:30.027486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.827 [2024-07-14 19:06:30.027511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.827 [2024-07-14 19:06:30.027525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.827 [2024-07-14 19:06:30.027537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.827 [2024-07-14 19:06:30.027567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.827 qpair failed and we were unable to recover it. 
00:34:41.827 [2024-07-14 19:06:30.037422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.827 [2024-07-14 19:06:30.037529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.827 [2024-07-14 19:06:30.037555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.827 [2024-07-14 19:06:30.037569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.827 [2024-07-14 19:06:30.037585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.827 [2024-07-14 19:06:30.037614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.827 qpair failed and we were unable to recover it. 
00:34:41.827 [2024-07-14 19:06:30.047401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.827 [2024-07-14 19:06:30.047504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.827 [2024-07-14 19:06:30.047536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.827 [2024-07-14 19:06:30.047551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.827 [2024-07-14 19:06:30.047567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:41.827 [2024-07-14 19:06:30.047597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:41.827 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-14 19:06:30.057446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.087 [2024-07-14 19:06:30.057560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.087 [2024-07-14 19:06:30.057586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.087 [2024-07-14 19:06:30.057599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.087 [2024-07-14 19:06:30.057611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.087 [2024-07-14 19:06:30.057640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-14 19:06:30.067485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.087 [2024-07-14 19:06:30.067588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.087 [2024-07-14 19:06:30.067614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.087 [2024-07-14 19:06:30.067628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.087 [2024-07-14 19:06:30.067640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.087 [2024-07-14 19:06:30.067671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-14 19:06:30.077580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.087 [2024-07-14 19:06:30.077678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.087 [2024-07-14 19:06:30.077703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.087 [2024-07-14 19:06:30.077717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.087 [2024-07-14 19:06:30.077729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.087 [2024-07-14 19:06:30.077759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-14 19:06:30.087590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.087 [2024-07-14 19:06:30.087702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.087 [2024-07-14 19:06:30.087730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.087 [2024-07-14 19:06:30.087745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.087 [2024-07-14 19:06:30.087757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.087 [2024-07-14 19:06:30.087796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-14 19:06:30.097605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.087 [2024-07-14 19:06:30.097712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.087 [2024-07-14 19:06:30.097738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.087 [2024-07-14 19:06:30.097753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.087 [2024-07-14 19:06:30.097765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.087 [2024-07-14 19:06:30.097795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.087 [2024-07-14 19:06:30.107662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.087 [2024-07-14 19:06:30.107783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.087 [2024-07-14 19:06:30.107810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.087 [2024-07-14 19:06:30.107825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.087 [2024-07-14 19:06:30.107838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.087 [2024-07-14 19:06:30.107871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.087 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.117626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.117721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.117747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.117761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.117773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.117803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.127661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.127758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.127784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.127798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.127810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.127842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.137694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.137805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.137836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.137851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.137864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.137902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.147691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.147788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.147814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.147828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.147840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.147870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.157734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.157832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.157858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.157872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.157899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.157930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.167892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.167991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.168016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.168030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.168042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.168075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.177885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.178014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.178040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.178054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.178066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.178104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.187828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.187936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.187962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.187976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.187988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.188017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.197832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.197934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.197959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.197974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.197986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.198015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.207903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.208044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.208069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.208083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.208095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.208124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.217931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.218053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.218078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.218092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.218104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.218133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.227952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.228053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.228083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.228098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.228111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.228140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.238051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.238146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.238171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.238186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.238200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.238229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.248052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.248193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.248218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.088 [2024-07-14 19:06:30.248232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.088 [2024-07-14 19:06:30.248245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.088 [2024-07-14 19:06:30.248290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.088 qpair failed and we were unable to recover it. 
00:34:42.088 [2024-07-14 19:06:30.258026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.088 [2024-07-14 19:06:30.258135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.088 [2024-07-14 19:06:30.258160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.089 [2024-07-14 19:06:30.258174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.089 [2024-07-14 19:06:30.258186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.089 [2024-07-14 19:06:30.258214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.089 qpair failed and we were unable to recover it. 
00:34:42.089 [2024-07-14 19:06:30.268089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.089 [2024-07-14 19:06:30.268206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.089 [2024-07-14 19:06:30.268232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.089 [2024-07-14 19:06:30.268246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.089 [2024-07-14 19:06:30.268268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.089 [2024-07-14 19:06:30.268299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.089 qpair failed and we were unable to recover it. 
00:34:42.089 [2024-07-14 19:06:30.278075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.089 [2024-07-14 19:06:30.278187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.089 [2024-07-14 19:06:30.278216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.089 [2024-07-14 19:06:30.278230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.089 [2024-07-14 19:06:30.278242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.089 [2024-07-14 19:06:30.278274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.089 qpair failed and we were unable to recover it. 
00:34:42.089 [2024-07-14 19:06:30.288124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.089 [2024-07-14 19:06:30.288215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.089 [2024-07-14 19:06:30.288240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.089 [2024-07-14 19:06:30.288254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.089 [2024-07-14 19:06:30.288266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.089 [2024-07-14 19:06:30.288295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.089 qpair failed and we were unable to recover it. 
00:34:42.089 [2024-07-14 19:06:30.298165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.089 [2024-07-14 19:06:30.298263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.089 [2024-07-14 19:06:30.298288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.089 [2024-07-14 19:06:30.298302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.089 [2024-07-14 19:06:30.298314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.089 [2024-07-14 19:06:30.298343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.089 qpair failed and we were unable to recover it. 
00:34:42.089 [2024-07-14 19:06:30.308164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.089 [2024-07-14 19:06:30.308268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.089 [2024-07-14 19:06:30.308294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.089 [2024-07-14 19:06:30.308308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.089 [2024-07-14 19:06:30.308320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8a60000b90 00:34:42.089 [2024-07-14 19:06:30.308351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:42.089 qpair failed and we were unable to recover it. 
00:34:42.348 [2024-07-14 19:06:30.318210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.348 [2024-07-14 19:06:30.318323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.348 [2024-07-14 19:06:30.318357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.348 [2024-07-14 19:06:30.318373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.348 [2024-07-14 19:06:30.318388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.348 [2024-07-14 19:06:30.318418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.348 qpair failed and we were unable to recover it. 
00:34:42.348 [2024-07-14 19:06:30.328256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.348 [2024-07-14 19:06:30.328361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.348 [2024-07-14 19:06:30.328387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.348 [2024-07-14 19:06:30.328402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.348 [2024-07-14 19:06:30.328418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.348 [2024-07-14 19:06:30.328446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.348 qpair failed and we were unable to recover it. 
00:34:42.348 [2024-07-14 19:06:30.338228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.348 [2024-07-14 19:06:30.338333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.348 [2024-07-14 19:06:30.338360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.348 [2024-07-14 19:06:30.338374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.348 [2024-07-14 19:06:30.338386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.348 [2024-07-14 19:06:30.338414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.348 qpair failed and we were unable to recover it. 
00:34:42.348 [2024-07-14 19:06:30.348267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.348 [2024-07-14 19:06:30.348368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.348 [2024-07-14 19:06:30.348394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.348 [2024-07-14 19:06:30.348408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.348 [2024-07-14 19:06:30.348424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.348 [2024-07-14 19:06:30.348452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.348 qpair failed and we were unable to recover it. 
00:34:42.348 [2024-07-14 19:06:30.358285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.348 [2024-07-14 19:06:30.358384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.348 [2024-07-14 19:06:30.358410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.348 [2024-07-14 19:06:30.358431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.348 [2024-07-14 19:06:30.358444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.348 [2024-07-14 19:06:30.358475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.348 qpair failed and we were unable to recover it. 
00:34:42.348 [2024-07-14 19:06:30.368348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.348 [2024-07-14 19:06:30.368443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.348 [2024-07-14 19:06:30.368469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.348 [2024-07-14 19:06:30.368483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.348 [2024-07-14 19:06:30.368495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.348 [2024-07-14 19:06:30.368523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.348 qpair failed and we were unable to recover it. 
00:34:42.348 [2024-07-14 19:06:30.378340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.348 [2024-07-14 19:06:30.378435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.348 [2024-07-14 19:06:30.378461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.348 [2024-07-14 19:06:30.378476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.348 [2024-07-14 19:06:30.378488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.348 [2024-07-14 19:06:30.378516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.348 qpair failed and we were unable to recover it. 
00:34:42.348 [2024-07-14 19:06:30.388458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.348 [2024-07-14 19:06:30.388608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.348 [2024-07-14 19:06:30.388637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.348 [2024-07-14 19:06:30.388652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.349 [2024-07-14 19:06:30.388664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.349 [2024-07-14 19:06:30.388708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.349 qpair failed and we were unable to recover it. 
00:34:42.349 [2024-07-14 19:06:30.398436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.349 [2024-07-14 19:06:30.398536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.349 [2024-07-14 19:06:30.398563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.349 [2024-07-14 19:06:30.398578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.349 [2024-07-14 19:06:30.398590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.349 [2024-07-14 19:06:30.398621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.349 qpair failed and we were unable to recover it. 
00:34:42.349 [2024-07-14 19:06:30.408456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.349 [2024-07-14 19:06:30.408557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.349 [2024-07-14 19:06:30.408583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.349 [2024-07-14 19:06:30.408597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.349 [2024-07-14 19:06:30.408609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.349 [2024-07-14 19:06:30.408637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.349 qpair failed and we were unable to recover it. 
00:34:42.349 [2024-07-14 19:06:30.418476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.349 [2024-07-14 19:06:30.418577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.349 [2024-07-14 19:06:30.418604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.349 [2024-07-14 19:06:30.418618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.349 [2024-07-14 19:06:30.418631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.349 [2024-07-14 19:06:30.418658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.349 qpair failed and we were unable to recover it. 
00:34:42.349 [2024-07-14 19:06:30.428547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.349 [2024-07-14 19:06:30.428646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.349 [2024-07-14 19:06:30.428671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.349 [2024-07-14 19:06:30.428685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.349 [2024-07-14 19:06:30.428697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.349 [2024-07-14 19:06:30.428725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.349 qpair failed and we were unable to recover it. 
00:34:42.349 [2024-07-14 19:06:30.438557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.349 [2024-07-14 19:06:30.438662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.349 [2024-07-14 19:06:30.438688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.349 [2024-07-14 19:06:30.438702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.349 [2024-07-14 19:06:30.438714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.349 [2024-07-14 19:06:30.438744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.349 qpair failed and we were unable to recover it. 
00:34:42.349 [2024-07-14 19:06:30.448534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.349 [2024-07-14 19:06:30.448653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.349 [2024-07-14 19:06:30.448678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.349 [2024-07-14 19:06:30.448700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.349 [2024-07-14 19:06:30.448715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.349 [2024-07-14 19:06:30.448742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.349 qpair failed and we were unable to recover it. 
00:34:42.349 [2024-07-14 19:06:30.458551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.349 [2024-07-14 19:06:30.458674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.349 [2024-07-14 19:06:30.458699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.349 [2024-07-14 19:06:30.458713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.349 [2024-07-14 19:06:30.458726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.349 [2024-07-14 19:06:30.458753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.349 qpair failed and we were unable to recover it.
00:34:42.349 [2024-07-14 19:06:30.468602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.349 [2024-07-14 19:06:30.468737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.349 [2024-07-14 19:06:30.468762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.349 [2024-07-14 19:06:30.468777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.349 [2024-07-14 19:06:30.468789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.349 [2024-07-14 19:06:30.468819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.349 qpair failed and we were unable to recover it.
00:34:42.349 [2024-07-14 19:06:30.478659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.349 [2024-07-14 19:06:30.478771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.349 [2024-07-14 19:06:30.478796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.349 [2024-07-14 19:06:30.478811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.349 [2024-07-14 19:06:30.478823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.349 [2024-07-14 19:06:30.478852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.349 qpair failed and we were unable to recover it.
00:34:42.349 [2024-07-14 19:06:30.488646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.349 [2024-07-14 19:06:30.488743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.349 [2024-07-14 19:06:30.488768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.349 [2024-07-14 19:06:30.488782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.349 [2024-07-14 19:06:30.488794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.349 [2024-07-14 19:06:30.488823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.349 qpair failed and we were unable to recover it.
00:34:42.349 [2024-07-14 19:06:30.498710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.349 [2024-07-14 19:06:30.498814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.349 [2024-07-14 19:06:30.498839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.349 [2024-07-14 19:06:30.498853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.349 [2024-07-14 19:06:30.498866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.349 [2024-07-14 19:06:30.498900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.349 qpair failed and we were unable to recover it.
00:34:42.349 [2024-07-14 19:06:30.508705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.349 [2024-07-14 19:06:30.508801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.349 [2024-07-14 19:06:30.508825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.349 [2024-07-14 19:06:30.508840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.349 [2024-07-14 19:06:30.508852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.349 [2024-07-14 19:06:30.508886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.349 qpair failed and we were unable to recover it.
00:34:42.349 [2024-07-14 19:06:30.518736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.349 [2024-07-14 19:06:30.518830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.349 [2024-07-14 19:06:30.518855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.349 [2024-07-14 19:06:30.518870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.349 [2024-07-14 19:06:30.518891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.349 [2024-07-14 19:06:30.518920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.349 qpair failed and we were unable to recover it.
00:34:42.349 [2024-07-14 19:06:30.528755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.349 [2024-07-14 19:06:30.528853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.349 [2024-07-14 19:06:30.528886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.349 [2024-07-14 19:06:30.528902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.349 [2024-07-14 19:06:30.528917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.350 [2024-07-14 19:06:30.528945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.350 qpair failed and we were unable to recover it.
00:34:42.350 [2024-07-14 19:06:30.538797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.350 [2024-07-14 19:06:30.538924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.350 [2024-07-14 19:06:30.538950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.350 [2024-07-14 19:06:30.538970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.350 [2024-07-14 19:06:30.538983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.350 [2024-07-14 19:06:30.539011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.350 qpair failed and we were unable to recover it.
00:34:42.350 [2024-07-14 19:06:30.548835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.350 [2024-07-14 19:06:30.548959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.350 [2024-07-14 19:06:30.548985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.350 [2024-07-14 19:06:30.548999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.350 [2024-07-14 19:06:30.549011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.350 [2024-07-14 19:06:30.549039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.350 qpair failed and we were unable to recover it.
00:34:42.350 [2024-07-14 19:06:30.558942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.350 [2024-07-14 19:06:30.559047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.350 [2024-07-14 19:06:30.559072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.350 [2024-07-14 19:06:30.559086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.350 [2024-07-14 19:06:30.559098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.350 [2024-07-14 19:06:30.559126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.350 qpair failed and we were unable to recover it.
00:34:42.350 [2024-07-14 19:06:30.568945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.350 [2024-07-14 19:06:30.569097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.350 [2024-07-14 19:06:30.569123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.350 [2024-07-14 19:06:30.569137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.350 [2024-07-14 19:06:30.569149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.350 [2024-07-14 19:06:30.569177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.350 qpair failed and we were unable to recover it.
00:34:42.608 [2024-07-14 19:06:30.578977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.608 [2024-07-14 19:06:30.579079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.608 [2024-07-14 19:06:30.579105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.608 [2024-07-14 19:06:30.579119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.608 [2024-07-14 19:06:30.579131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.608 [2024-07-14 19:06:30.579158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.608 qpair failed and we were unable to recover it.
00:34:42.608 [2024-07-14 19:06:30.588990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.608 [2024-07-14 19:06:30.589090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.608 [2024-07-14 19:06:30.589115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.608 [2024-07-14 19:06:30.589129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.608 [2024-07-14 19:06:30.589142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.608 [2024-07-14 19:06:30.589169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.608 qpair failed and we were unable to recover it.
00:34:42.608 [2024-07-14 19:06:30.598960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.608 [2024-07-14 19:06:30.599058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.608 [2024-07-14 19:06:30.599084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.608 [2024-07-14 19:06:30.599099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.608 [2024-07-14 19:06:30.599112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.608 [2024-07-14 19:06:30.599139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.608 qpair failed and we were unable to recover it.
00:34:42.608 [2024-07-14 19:06:30.608998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.608 [2024-07-14 19:06:30.609086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.608 [2024-07-14 19:06:30.609110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.608 [2024-07-14 19:06:30.609124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.608 [2024-07-14 19:06:30.609136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.608 [2024-07-14 19:06:30.609167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.608 qpair failed and we were unable to recover it.
00:34:42.608 [2024-07-14 19:06:30.619025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.608 [2024-07-14 19:06:30.619166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.608 [2024-07-14 19:06:30.619192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.608 [2024-07-14 19:06:30.619206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.608 [2024-07-14 19:06:30.619218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.608 [2024-07-14 19:06:30.619245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.608 qpair failed and we were unable to recover it.
00:34:42.608 [2024-07-14 19:06:30.629066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.629168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.629198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.629214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.629226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.629253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.639206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.639338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.639382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.639396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.639409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.639452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.649114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.649204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.649229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.649243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.649256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.649283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.659134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.659241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.659267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.659281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.659293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.659320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.669165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.669265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.669290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.669304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.669316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.669344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.679209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.679307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.679333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.679347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.679362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.679390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.689215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.689343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.689368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.689381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.689394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.689425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.699271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.699397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.699423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.699437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.699449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.699476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.709272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.709369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.709394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.709408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.709420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.709447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.719383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.719490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.719521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.719536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.719549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.719577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.729361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.729456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.729481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.729495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.729507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.729535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.739428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.739527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.739552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.739566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.739577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.739604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.749400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.749496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.749521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.749535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.749547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.749576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.759406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.759501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.759525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.759539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.609 [2024-07-14 19:06:30.759551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.609 [2024-07-14 19:06:30.759584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.609 qpair failed and we were unable to recover it.
00:34:42.609 [2024-07-14 19:06:30.769473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.609 [2024-07-14 19:06:30.769567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.609 [2024-07-14 19:06:30.769593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.609 [2024-07-14 19:06:30.769607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.610 [2024-07-14 19:06:30.769619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.610 [2024-07-14 19:06:30.769646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.610 qpair failed and we were unable to recover it.
00:34:42.610 [2024-07-14 19:06:30.779475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.610 [2024-07-14 19:06:30.779577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.610 [2024-07-14 19:06:30.779601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.610 [2024-07-14 19:06:30.779614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.610 [2024-07-14 19:06:30.779626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.610 [2024-07-14 19:06:30.779653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.610 qpair failed and we were unable to recover it.
00:34:42.610 [2024-07-14 19:06:30.789507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.610 [2024-07-14 19:06:30.789629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.610 [2024-07-14 19:06:30.789653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.610 [2024-07-14 19:06:30.789667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.610 [2024-07-14 19:06:30.789679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.610 [2024-07-14 19:06:30.789708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.610 qpair failed and we were unable to recover it.
00:34:42.610 [2024-07-14 19:06:30.799527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.610 [2024-07-14 19:06:30.799626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.610 [2024-07-14 19:06:30.799651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.610 [2024-07-14 19:06:30.799665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.610 [2024-07-14 19:06:30.799677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.610 [2024-07-14 19:06:30.799704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.610 qpair failed and we were unable to recover it.
00:34:42.610 [2024-07-14 19:06:30.809562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.610 [2024-07-14 19:06:30.809660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.610 [2024-07-14 19:06:30.809690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.610 [2024-07-14 19:06:30.809705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.610 [2024-07-14 19:06:30.809717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.610 [2024-07-14 19:06:30.809745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.610 qpair failed and we were unable to recover it.
00:34:42.610 [2024-07-14 19:06:30.819606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.610 [2024-07-14 19:06:30.819718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.610 [2024-07-14 19:06:30.819743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.610 [2024-07-14 19:06:30.819757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.610 [2024-07-14 19:06:30.819769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.610 [2024-07-14 19:06:30.819796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.610 qpair failed and we were unable to recover it. 
00:34:42.610 [2024-07-14 19:06:30.829627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:42.610 [2024-07-14 19:06:30.829724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:42.610 [2024-07-14 19:06:30.829749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:42.610 [2024-07-14 19:06:30.829764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:42.610 [2024-07-14 19:06:30.829776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20 00:34:42.610 [2024-07-14 19:06:30.829803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.610 qpair failed and we were unable to recover it. 
00:34:42.868 [2024-07-14 19:06:30.839657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.839752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.839778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.839792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.839804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.839835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.868 [2024-07-14 19:06:30.849682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.849780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.849805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.849819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.849831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.849864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.868 [2024-07-14 19:06:30.859691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.859801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.859826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.859840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.859852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.859886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.868 [2024-07-14 19:06:30.869742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.869841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.869867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.869887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.869901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.869931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.868 [2024-07-14 19:06:30.879746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.879843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.879868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.879889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.879902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.879930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.868 [2024-07-14 19:06:30.889828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.889934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.889959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.889973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.889985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.890012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.868 [2024-07-14 19:06:30.899860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.899983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.900013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.900028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.900040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.900067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.868 [2024-07-14 19:06:30.909853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.909966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.909991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.910005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.910017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.910045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.868 [2024-07-14 19:06:30.919867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.919989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.920014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.920028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.920040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.920071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.868 [2024-07-14 19:06:30.929918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.868 [2024-07-14 19:06:30.930010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.868 [2024-07-14 19:06:30.930036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.868 [2024-07-14 19:06:30.930050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.868 [2024-07-14 19:06:30.930062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.868 [2024-07-14 19:06:30.930089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.868 qpair failed and we were unable to recover it.
00:34:42.869 [2024-07-14 19:06:30.939940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.869 [2024-07-14 19:06:30.940037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.869 [2024-07-14 19:06:30.940062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.869 [2024-07-14 19:06:30.940076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.869 [2024-07-14 19:06:30.940093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.869 [2024-07-14 19:06:30.940124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.869 qpair failed and we were unable to recover it.
00:34:42.869 [2024-07-14 19:06:30.949972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.869 [2024-07-14 19:06:30.950078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.869 [2024-07-14 19:06:30.950103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.869 [2024-07-14 19:06:30.950117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.869 [2024-07-14 19:06:30.950129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.869 [2024-07-14 19:06:30.950157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.869 qpair failed and we were unable to recover it.
00:34:42.869 [2024-07-14 19:06:30.959982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.869 [2024-07-14 19:06:30.960084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.869 [2024-07-14 19:06:30.960109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.869 [2024-07-14 19:06:30.960124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.869 [2024-07-14 19:06:30.960136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.869 [2024-07-14 19:06:30.960164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.869 qpair failed and we were unable to recover it.
00:34:42.869 [2024-07-14 19:06:30.969995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.869 [2024-07-14 19:06:30.970132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.869 [2024-07-14 19:06:30.970157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.869 [2024-07-14 19:06:30.970171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.869 [2024-07-14 19:06:30.970183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.869 [2024-07-14 19:06:30.970211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.869 qpair failed and we were unable to recover it.
00:34:42.869 [2024-07-14 19:06:30.980140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.869 [2024-07-14 19:06:30.980242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.869 [2024-07-14 19:06:30.980267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.869 [2024-07-14 19:06:30.980281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.869 [2024-07-14 19:06:30.980293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.869 [2024-07-14 19:06:30.980320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.869 qpair failed and we were unable to recover it.
00:34:42.869 [2024-07-14 19:06:30.990085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:42.869 [2024-07-14 19:06:30.990189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:42.869 [2024-07-14 19:06:30.990214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:42.869 [2024-07-14 19:06:30.990228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:42.869 [2024-07-14 19:06:30.990240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13aff20
00:34:42.869 [2024-07-14 19:06:30.990268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:42.869 qpair failed and we were unable to recover it.
00:34:42.869 [2024-07-14 19:06:30.990393] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:34:42.869 A controller has encountered a failure and is being reset.
00:34:42.869 Controller properly reset.
00:34:42.869 Initializing NVMe Controllers
00:34:42.869 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:42.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:42.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:34:42.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:34:42.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:34:42.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:34:42.869 Initialization complete. Launching workers.
00:34:42.869 Starting thread on core 1 00:34:42.869 Starting thread on core 2 00:34:42.869 Starting thread on core 3 00:34:42.869 Starting thread on core 0 00:34:42.869 19:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:42.869 00:34:42.869 real 0m10.807s 00:34:42.869 user 0m18.425s 00:34:42.869 sys 0m5.317s 00:34:42.869 19:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:42.869 19:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:42.869 ************************************ 00:34:42.869 END TEST nvmf_target_disconnect_tc2 00:34:42.869 ************************************ 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:43.126 rmmod nvme_tcp 00:34:43.126 rmmod nvme_fabrics 00:34:43.126 rmmod nvme_keyring 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:43.126 
19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3755879 ']' 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3755879 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3755879 ']' 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3755879 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3755879 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3755879' 00:34:43.126 killing process with pid 3755879 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3755879 00:34:43.126 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3755879 00:34:43.384 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:43.385 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:43.385 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:43.385 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:43.385 19:06:31 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:43.385 19:06:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.385 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:43.385 19:06:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.287 19:06:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:45.287 00:34:45.287 real 0m15.713s 00:34:45.287 user 0m44.729s 00:34:45.287 sys 0m7.380s 00:34:45.287 19:06:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:45.287 19:06:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:45.287 ************************************ 00:34:45.287 END TEST nvmf_target_disconnect 00:34:45.287 ************************************ 00:34:45.287 19:06:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:45.287 19:06:33 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:45.287 19:06:33 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:45.287 19:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.287 19:06:33 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:45.287 00:34:45.287 real 27m9.826s 00:34:45.287 user 74m20.234s 00:34:45.287 sys 6m16.577s 00:34:45.287 19:06:33 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:45.287 19:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.287 ************************************ 00:34:45.287 END TEST nvmf_tcp 00:34:45.287 ************************************ 00:34:45.545 19:06:33 -- common/autotest_common.sh@1142 -- # return 0 00:34:45.545 19:06:33 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:45.545 19:06:33 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:45.545 19:06:33 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:45.545 19:06:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:45.545 19:06:33 -- common/autotest_common.sh@10 -- # set +x 00:34:45.545 ************************************ 00:34:45.545 START TEST spdkcli_nvmf_tcp 00:34:45.545 ************************************ 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:45.545 * Looking for test storage... 00:34:45.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.545 19:06:33 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.545 19:06:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3756984 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3756984 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3756984 ']' 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:45.546 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.546 [2024-07-14 19:06:33.656574] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:45.546 [2024-07-14 19:06:33.656670] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756984 ] 00:34:45.546 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.546 [2024-07-14 19:06:33.722159] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:45.803 [2024-07-14 19:06:33.817098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.803 [2024-07-14 19:06:33.817103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.803 19:06:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:45.803 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:45.803 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:45.803 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:45.803 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:45.803 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:45.803 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:45.803 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:45.803 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:45.803 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' 
True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:45.803 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:45.803 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:45.803 ' 00:34:48.338 [2024-07-14 19:06:36.490467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:49.716 [2024-07-14 19:06:37.714759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:52.251 [2024-07-14 19:06:39.985953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:54.179 [2024-07-14 19:06:41.952210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:55.557 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:34:55.557 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:55.557 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:55.557 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:55.557 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:55.557 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:55.557 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:55.557 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:55.557 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:55.557 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:55.557 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:55.557 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:55.557 19:06:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:55.557 19:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:55.557 19:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.557 19:06:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:55.557 19:06:43 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:34:55.557 19:06:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.557 19:06:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:55.557 19:06:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:55.815 19:06:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:56.073 19:06:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:56.073 19:06:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:56.073 19:06:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:56.073 19:06:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.073 19:06:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:56.073 19:06:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:56.073 19:06:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:56.073 19:06:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:56.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:56.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:56.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:56.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:56.073 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:56.073 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:56.073 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:56.073 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:56.073 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:56.073 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:56.073 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:56.073 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:56.073 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:56.073 ' 00:35:01.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:01.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:01.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:01.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:01.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:01.348 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:01.348 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:01.348 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:01.348 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:01.348 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:01.348 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:01.348 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:01.348 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:01.348 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3756984 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3756984 ']' 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3756984 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3756984 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3756984' 00:35:01.348 killing process with pid 3756984 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3756984 00:35:01.348 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3756984 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3756984 ']' 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@14 -- # killprocess 3756984 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3756984 ']' 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3756984 00:35:01.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3756984) - No such process 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3756984 is not found' 00:35:01.608 Process with pid 3756984 is not found 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:01.608 00:35:01.608 real 0m16.048s 00:35:01.608 user 0m33.984s 00:35:01.608 sys 0m0.795s 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:01.608 19:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.608 ************************************ 00:35:01.608 END TEST spdkcli_nvmf_tcp 00:35:01.608 ************************************ 00:35:01.608 19:06:49 -- common/autotest_common.sh@1142 -- # return 0 00:35:01.608 19:06:49 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:01.608 19:06:49 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:01.608 19:06:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:01.608 19:06:49 -- common/autotest_common.sh@10 -- # set +x 00:35:01.608 ************************************ 00:35:01.608 START TEST nvmf_identify_passthru 00:35:01.608 
************************************ 00:35:01.608 19:06:49 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:01.608 * Looking for test storage... 00:35:01.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:01.608 19:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:01.608 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:01.608 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.608 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.609 19:06:49 nvmf_identify_passthru 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:01.609 19:06:49 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.609 19:06:49 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.609 19:06:49 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:01.609 19:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:01.609 19:06:49 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.609 19:06:49 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.609 19:06:49 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:35:01.609 19:06:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.609 19:06:49 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.609 19:06:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:01.609 19:06:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:01.609 19:06:49 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:35:01.609 19:06:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:03.569 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:03.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:03.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:03.570 19:06:51 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:03.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.570 19:06:51 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:03.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:03.570 19:06:51 
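The device-discovery loop traced above (nvmf/common.sh@383/@399) maps each PCI address to its kernel net device by globbing the sysfs `net` directory and stripping the path prefix. A standalone sketch of that step, reconstructed from the trace (the PCI address is the one from this run; on a machine without that device the glob simply stays unexpanded):

```shell
#!/bin/sh
# Map a PCI address to the net device(s) registered under it, as the
# harness does: glob /sys/bus/pci/devices/$pci/net/* and keep the basename.
pci=0000:0a:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
  # ${dev##*/} is the same prefix-strip used at nvmf/common.sh@399
  echo "Found net device under $pci: ${dev##*/}"
done
```

In this run the loop printed `cvl_0_0` and `cvl_0_1` for the two E810 ports.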
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:03.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:03.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:35:03.570 00:35:03.570 --- 10.0.0.2 ping statistics --- 00:35:03.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.570 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:03.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:03.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:35:03.570 00:35:03.570 --- 10.0.0.1 ping statistics --- 00:35:03.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.570 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:03.570 19:06:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:03.570 19:06:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.570 19:06:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:03.570 19:06:51 nvmf_identify_passthru -- 
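The `nvmf_tcp_init` sequence just traced (flush both interfaces, create the namespace, move the target port into it, address the two sides 10.0.0.1/10.0.0.2, bring links up, open port 4420, ping both ways) can be collected into one script. This is a reconstruction from the trace, not part of the test suite; names match the log, and it defaults to a dry run because the real commands require root:

```shell
#!/bin/sh
# Reconstruction of nvmf_tcp_init (nvmf/common.sh@229-268) from the trace above.
# Defaults to printing the commands; set DRY_RUN=0 and run as root to apply.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0       # moved into the namespace, addressed 10.0.0.2/24
INITIATOR_IF=cvl_0_1    # stays in the root namespace, addressed 10.0.0.1/24

run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow the initiator side to reach the NVMe/TCP listener on 4420.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Reachability checks in both directions, as in the log.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting the target interface in its own namespace is what lets one host act as both initiator and target over real NIC ports instead of loopback.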
common/autotest_common.sh@1513 -- # bdfs=() 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:35:03.570 19:06:51 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:35:03.829 19:06:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:03.830 19:06:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:03.830 19:06:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:03.830 19:06:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:03.830 19:06:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:03.830 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.037 19:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:08.037 19:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:08.037 19:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:35:08.037 19:06:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:08.037 EAL: No free 2048 kB hugepages reported on node 1 00:35:12.230 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:12.230 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.230 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.230 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3761596 00:35:12.230 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:12.230 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:12.230 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3761596 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3761596 ']' 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:12.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:12.230 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.230 [2024-07-14 19:07:00.319112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:12.230 [2024-07-14 19:07:00.319219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:12.230 EAL: No free 2048 kB hugepages reported on node 1 00:35:12.230 [2024-07-14 19:07:00.384806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:12.489 [2024-07-14 19:07:00.475827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:12.489 [2024-07-14 19:07:00.475905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:12.489 [2024-07-14 19:07:00.475920] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:12.489 [2024-07-14 19:07:00.475938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:12.489 [2024-07-14 19:07:00.475961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:12.489 [2024-07-14 19:07:00.476028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.489 [2024-07-14 19:07:00.476087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:12.489 [2024-07-14 19:07:00.476153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:12.489 [2024-07-14 19:07:00.476155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:12.489 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.489 INFO: Log level set to 20 00:35:12.489 INFO: Requests: 00:35:12.489 { 00:35:12.489 "jsonrpc": "2.0", 00:35:12.489 "method": "nvmf_set_config", 00:35:12.489 "id": 1, 00:35:12.489 "params": { 00:35:12.489 "admin_cmd_passthru": { 00:35:12.489 "identify_ctrlr": true 00:35:12.489 } 00:35:12.489 } 00:35:12.489 } 00:35:12.489 00:35:12.489 INFO: response: 00:35:12.489 { 00:35:12.489 "jsonrpc": "2.0", 00:35:12.489 "id": 1, 00:35:12.489 "result": true 00:35:12.489 } 00:35:12.489 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.489 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.489 INFO: Setting log level to 20 00:35:12.489 INFO: Setting log level to 20 00:35:12.489 INFO: Log level set to 20 00:35:12.489 INFO: Log level set to 20 00:35:12.489 
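The `INFO: Requests:` dump above shows the exact JSON-RPC 2.0 body that `rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr` sends. A minimal way to reproduce that body outside the harness (the `rpc_body` helper and the `nc` delivery line are illustrative sketches, not part of the suite; the tests themselves go through `scripts/rpc.py`, which also handles replies and errors):

```shell
#!/bin/sh
# Build the JSON-RPC request body matching the INFO: Requests dump above.
rpc_body() {
  cat <<'EOF'
{"jsonrpc": "2.0", "method": "nvmf_set_config", "id": 1,
 "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}
EOF
}

# With nvmf_tgt running (note: here it listens inside the netns), this
# could be delivered over the default UNIX socket, e.g.:
#   rpc_body | nc -U /var/tmp/spdk.sock
rpc_body
```

Setting `admin_cmd_passthru.identify_ctrlr` is what enables the custom identify handler reported a few lines below (`Custom identify ctrlr handler enabled`).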
INFO: Requests: 00:35:12.489 { 00:35:12.489 "jsonrpc": "2.0", 00:35:12.489 "method": "framework_start_init", 00:35:12.489 "id": 1 00:35:12.489 } 00:35:12.489 00:35:12.489 INFO: Requests: 00:35:12.489 { 00:35:12.489 "jsonrpc": "2.0", 00:35:12.489 "method": "framework_start_init", 00:35:12.489 "id": 1 00:35:12.489 } 00:35:12.489 00:35:12.489 [2024-07-14 19:07:00.639237] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:12.489 INFO: response: 00:35:12.489 { 00:35:12.489 "jsonrpc": "2.0", 00:35:12.489 "id": 1, 00:35:12.489 "result": true 00:35:12.489 } 00:35:12.489 00:35:12.489 INFO: response: 00:35:12.489 { 00:35:12.489 "jsonrpc": "2.0", 00:35:12.489 "id": 1, 00:35:12.489 "result": true 00:35:12.489 } 00:35:12.489 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.489 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.489 INFO: Setting log level to 40 00:35:12.489 INFO: Setting log level to 40 00:35:12.489 INFO: Setting log level to 40 00:35:12.489 [2024-07-14 19:07:00.649367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.489 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:12.489 19:07:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:12.489 19:07:00 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.489 19:07:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.779 Nvme0n1 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.779 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.779 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.779 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.779 [2024-07-14 19:07:03.541559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.779 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:15.779 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.779 19:07:03 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.779 [ 00:35:15.779 { 00:35:15.779 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:15.779 "subtype": "Discovery", 00:35:15.779 "listen_addresses": [], 00:35:15.779 "allow_any_host": true, 00:35:15.779 "hosts": [] 00:35:15.779 }, 00:35:15.779 { 00:35:15.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:15.779 "subtype": "NVMe", 00:35:15.780 "listen_addresses": [ 00:35:15.780 { 00:35:15.780 "trtype": "TCP", 00:35:15.780 "adrfam": "IPv4", 00:35:15.780 "traddr": "10.0.0.2", 00:35:15.780 "trsvcid": "4420" 00:35:15.780 } 00:35:15.780 ], 00:35:15.780 "allow_any_host": true, 00:35:15.780 "hosts": [], 00:35:15.780 "serial_number": "SPDK00000000000001", 00:35:15.780 "model_number": "SPDK bdev Controller", 00:35:15.780 "max_namespaces": 1, 00:35:15.780 "min_cntlid": 1, 00:35:15.780 "max_cntlid": 65519, 00:35:15.780 "namespaces": [ 00:35:15.780 { 00:35:15.780 "nsid": 1, 00:35:15.780 "bdev_name": "Nvme0n1", 00:35:15.780 "name": "Nvme0n1", 00:35:15.780 "nguid": "3FF19F499C0C4FFDA4E0926FD67D38C5", 00:35:15.780 "uuid": "3ff19f49-9c0c-4ffd-a4e0-926fd67d38c5" 00:35:15.780 } 00:35:15.780 ] 00:35:15.780 } 00:35:15.780 ] 00:35:15.780 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:15.780 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:15.780 19:07:03 nvmf_identify_passthru -- 
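The serial-number extraction just traced is the pattern this test uses throughout: pipe `spdk_nvme_identify` output through `grep` and `awk '{print $3}'`. The same pipeline in isolation, using the serial line from this run as canned input so it runs without hardware:

```shell
#!/bin/sh
# The grep+awk field extraction from target/identify_passthru.sh@54,
# applied to a sample identify line taken from the log above.
output="Serial Number: PHLJ916004901P0FGN"
serial=$(printf '%s\n' "$output" | grep 'Serial Number:' | awk '{print $3}')
echo "$serial"
```

The test then compares this value, fetched once over PCIe and once over the NVMe/TCP passthru subsystem, and fails if the two disagree.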
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:15.780 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:15.780 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.780 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:15.780 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:15.780 19:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:15.780 rmmod 
nvme_tcp 00:35:15.780 rmmod nvme_fabrics 00:35:15.780 rmmod nvme_keyring 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3761596 ']' 00:35:15.780 19:07:03 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3761596 00:35:15.780 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3761596 ']' 00:35:15.780 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3761596 00:35:15.780 19:07:03 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:15.780 19:07:04 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:15.780 19:07:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3761596 00:35:16.038 19:07:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:16.038 19:07:04 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:16.038 19:07:04 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3761596' 00:35:16.038 killing process with pid 3761596 00:35:16.038 19:07:04 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3761596 00:35:16.038 19:07:04 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3761596 00:35:17.414 19:07:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:17.414 19:07:05 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:17.414 19:07:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:17.414 19:07:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:35:17.414 19:07:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:17.414 19:07:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.414 19:07:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.414 19:07:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.951 19:07:07 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:19.951 00:35:19.951 real 0m17.979s 00:35:19.951 user 0m26.791s 00:35:19.951 sys 0m2.269s 00:35:19.951 19:07:07 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:19.951 19:07:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:19.951 ************************************ 00:35:19.951 END TEST nvmf_identify_passthru 00:35:19.951 ************************************ 00:35:19.951 19:07:07 -- common/autotest_common.sh@1142 -- # return 0 00:35:19.951 19:07:07 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:19.951 19:07:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:19.951 19:07:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:19.951 19:07:07 -- common/autotest_common.sh@10 -- # set +x 00:35:19.951 ************************************ 00:35:19.951 START TEST nvmf_dif 00:35:19.951 ************************************ 00:35:19.951 19:07:07 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:19.951 * Looking for test storage... 
00:35:19.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.951 19:07:07 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.951 19:07:07 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.951 19:07:07 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.951 19:07:07 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.951 19:07:07 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.951 19:07:07 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.951 19:07:07 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.951 19:07:07 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:19.951 19:07:07 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:19.951 19:07:07 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:19.951 19:07:07 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:19.951 19:07:07 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:19.951 19:07:07 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:19.951 19:07:07 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.951 19:07:07 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:19.951 19:07:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:19.951 19:07:07 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:19.951 19:07:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:21.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:35:21.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:21.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:21.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:21.850 19:07:09 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:21.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:21.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:35:21.850 00:35:21.850 --- 10.0.0.2 ping statistics --- 00:35:21.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.850 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:35:21.850 19:07:09 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:21.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:21.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:35:21.850 00:35:21.850 --- 10.0.0.1 ping statistics --- 00:35:21.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.851 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:35:21.851 19:07:09 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:21.851 19:07:09 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:21.851 19:07:09 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:21.851 19:07:09 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:22.786 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:22.786 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:22.786 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:22.786 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:22.786 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:22.786 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:22.786 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:22.786 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:22.786 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:22.786 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:22.786 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:22.786 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:22.786 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:22.786 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:22.786 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:22.786 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:22.786 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:22.786 19:07:10 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:22.786 19:07:10 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:22.786 19:07:10 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:22.786 19:07:10 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:22.786 19:07:10 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:22.786 19:07:10 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:22.786 19:07:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:22.786 19:07:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:22.786 19:07:10 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:22.786 19:07:10 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:22.786 19:07:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.786 19:07:10 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3764738 00:35:22.786 19:07:10 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:22.786 19:07:10 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3764738 00:35:22.786 19:07:10 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3764738 ']' 00:35:22.786 19:07:10 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.786 19:07:10 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:22.786 19:07:10 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.786 19:07:10 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:22.786 19:07:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.786 [2024-07-14 19:07:11.009194] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:35:22.786 [2024-07-14 19:07:11.009286] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.045 EAL: No free 2048 kB hugepages reported on node 1 00:35:23.045 [2024-07-14 19:07:11.074333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.045 [2024-07-14 19:07:11.157344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:23.045 [2024-07-14 19:07:11.157398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:23.045 [2024-07-14 19:07:11.157427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:23.045 [2024-07-14 19:07:11.157438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:23.045 [2024-07-14 19:07:11.157447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:23.045 [2024-07-14 19:07:11.157480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:23.045 19:07:11 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:23.045 19:07:11 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:23.303 19:07:11 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:23.303 19:07:11 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:23.303 19:07:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:23.303 19:07:11 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:23.303 19:07:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:23.303 19:07:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:23.303 19:07:11 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.303 19:07:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:23.303 [2024-07-14 19:07:11.304161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.303 19:07:11 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.303 19:07:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:23.303 19:07:11 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:23.303 19:07:11 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:23.303 19:07:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:23.303 ************************************ 00:35:23.303 START TEST fio_dif_1_default 00:35:23.303 ************************************ 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:23.303 bdev_null0 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:23.303 [2024-07-14 19:07:11.364457] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.303 19:07:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:23.303 { 00:35:23.303 "params": { 00:35:23.303 "name": "Nvme$subsystem", 00:35:23.303 "trtype": "$TEST_TRANSPORT", 00:35:23.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.303 "adrfam": "ipv4", 00:35:23.303 "trsvcid": "$NVMF_PORT", 00:35:23.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.303 "hdgst": ${hdgst:-false}, 00:35:23.303 "ddgst": ${ddgst:-false} 00:35:23.303 }, 00:35:23.303 "method": "bdev_nvme_attach_controller" 00:35:23.303 } 00:35:23.303 EOF 00:35:23.303 )") 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:23.304 "params": { 00:35:23.304 "name": "Nvme0", 00:35:23.304 "trtype": "tcp", 00:35:23.304 "traddr": "10.0.0.2", 00:35:23.304 "adrfam": "ipv4", 00:35:23.304 "trsvcid": "4420", 00:35:23.304 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:23.304 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:23.304 "hdgst": false, 00:35:23.304 "ddgst": false 00:35:23.304 }, 00:35:23.304 "method": "bdev_nvme_attach_controller" 00:35:23.304 }' 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:23.304 19:07:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.563 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:23.563 fio-3.35 
00:35:23.563 Starting 1 thread 00:35:23.563 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.780 00:35:35.780 filename0: (groupid=0, jobs=1): err= 0: pid=3764966: Sun Jul 14 19:07:22 2024 00:35:35.780 read: IOPS=188, BW=755KiB/s (773kB/s)(7584KiB/10041msec) 00:35:35.780 slat (nsec): min=4869, max=47901, avg=9282.21, stdev=2541.32 00:35:35.780 clat (usec): min=556, max=46097, avg=21153.19, stdev=20318.43 00:35:35.780 lat (usec): min=564, max=46113, avg=21162.48, stdev=20318.42 00:35:35.780 clat percentiles (usec): 00:35:35.780 | 1.00th=[ 603], 5.00th=[ 627], 10.00th=[ 644], 20.00th=[ 660], 00:35:35.780 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[41157], 60.00th=[41157], 00:35:35.780 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:35.780 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:35:35.780 | 99.99th=[45876] 00:35:35.780 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=756.80, stdev=29.87, samples=20 00:35:35.780 iops : min= 168, max= 192, avg=189.20, stdev= 7.47, samples=20 00:35:35.780 lat (usec) : 750=49.31%, 1000=0.26% 00:35:35.780 lat (msec) : 50=50.42% 00:35:35.780 cpu : usr=90.26%, sys=9.46%, ctx=19, majf=0, minf=230 00:35:35.780 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:35.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:35.780 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:35.780 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:35.780 00:35:35.780 Run status group 0 (all jobs): 00:35:35.780 READ: bw=755KiB/s (773kB/s), 755KiB/s-755KiB/s (773kB/s-773kB/s), io=7584KiB (7766kB), run=10041-10041msec 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:35.780 19:07:22 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 00:35:35.780 real 0m11.083s 00:35:35.780 user 0m10.206s 00:35:35.780 sys 0m1.226s 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 ************************************ 00:35:35.780 END TEST fio_dif_1_default 00:35:35.780 ************************************ 00:35:35.780 19:07:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:35.780 19:07:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:35.780 19:07:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:35.780 19:07:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 
************************************ 00:35:35.780 START TEST fio_dif_1_multi_subsystems 00:35:35.780 ************************************ 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 bdev_null0 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 [2024-07-14 19:07:22.490043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 bdev_null1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:35.780 19:07:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:35.780 { 00:35:35.780 "params": { 00:35:35.780 "name": "Nvme$subsystem", 00:35:35.780 "trtype": "$TEST_TRANSPORT", 00:35:35.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:35.780 "adrfam": "ipv4", 00:35:35.780 "trsvcid": "$NVMF_PORT", 00:35:35.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:35.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:35.780 "hdgst": ${hdgst:-false}, 00:35:35.780 "ddgst": ${ddgst:-false} 00:35:35.780 }, 00:35:35.780 "method": "bdev_nvme_attach_controller" 00:35:35.780 } 00:35:35.780 EOF 00:35:35.780 )") 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:35.780 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # 
local asan_lib= 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:35.781 { 00:35:35.781 "params": { 00:35:35.781 "name": "Nvme$subsystem", 00:35:35.781 "trtype": "$TEST_TRANSPORT", 00:35:35.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:35.781 "adrfam": "ipv4", 00:35:35.781 "trsvcid": "$NVMF_PORT", 00:35:35.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:35.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:35.781 "hdgst": ${hdgst:-false}, 00:35:35.781 "ddgst": ${ddgst:-false} 00:35:35.781 }, 00:35:35.781 "method": "bdev_nvme_attach_controller" 00:35:35.781 } 00:35:35.781 EOF 00:35:35.781 )") 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:35.781 
19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:35.781 "params": { 00:35:35.781 "name": "Nvme0", 00:35:35.781 "trtype": "tcp", 00:35:35.781 "traddr": "10.0.0.2", 00:35:35.781 "adrfam": "ipv4", 00:35:35.781 "trsvcid": "4420", 00:35:35.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:35.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:35.781 "hdgst": false, 00:35:35.781 "ddgst": false 00:35:35.781 }, 00:35:35.781 "method": "bdev_nvme_attach_controller" 00:35:35.781 },{ 00:35:35.781 "params": { 00:35:35.781 "name": "Nvme1", 00:35:35.781 "trtype": "tcp", 00:35:35.781 "traddr": "10.0.0.2", 00:35:35.781 "adrfam": "ipv4", 00:35:35.781 "trsvcid": "4420", 00:35:35.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:35.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:35.781 "hdgst": false, 00:35:35.781 "ddgst": false 00:35:35.781 }, 00:35:35.781 "method": "bdev_nvme_attach_controller" 00:35:35.781 }' 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:35.781 19:07:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:35.781 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:35.781 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:35.781 fio-3.35 00:35:35.781 Starting 2 threads 00:35:35.781 EAL: No free 2048 kB hugepages reported on node 1 00:35:45.764 00:35:45.764 filename0: (groupid=0, jobs=1): err= 0: pid=3766364: Sun Jul 14 19:07:33 2024 00:35:45.764 read: IOPS=96, BW=387KiB/s (397kB/s)(3888KiB/10034msec) 00:35:45.764 slat (nsec): min=7144, max=42368, avg=9275.21, stdev=3057.29 00:35:45.764 clat (usec): min=40777, max=43716, avg=41262.35, stdev=485.45 00:35:45.764 lat (usec): min=40784, max=43758, avg=41271.63, stdev=485.58 00:35:45.764 clat percentiles (usec): 00:35:45.764 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:45.764 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:45.764 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:45.764 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:35:45.764 | 99.99th=[43779] 00:35:45.764 bw ( KiB/s): min= 384, max= 416, per=33.80%, avg=387.20, stdev= 9.85, samples=20 00:35:45.764 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:35:45.764 lat (msec) : 50=100.00% 00:35:45.764 cpu : usr=94.11%, sys=5.59%, ctx=12, majf=0, minf=142 00:35:45.764 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:35:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.764 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.764 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:45.764 filename1: (groupid=0, jobs=1): err= 0: pid=3766365: Sun Jul 14 19:07:33 2024 00:35:45.764 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10006msec) 00:35:45.764 slat (nsec): min=7131, max=81504, avg=9428.03, stdev=3860.48 00:35:45.764 clat (usec): min=585, max=44704, avg=21035.65, stdev=20283.38 00:35:45.764 lat (usec): min=593, max=44746, avg=21045.07, stdev=20283.34 00:35:45.764 clat percentiles (usec): 00:35:45.764 | 1.00th=[ 635], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 693], 00:35:45.764 | 30.00th=[ 709], 40.00th=[ 750], 50.00th=[40633], 60.00th=[41157], 00:35:45.764 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:45.764 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:45.764 | 99.99th=[44827] 00:35:45.764 bw ( KiB/s): min= 704, max= 768, per=66.21%, avg=758.40, stdev=21.02, samples=20 00:35:45.764 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:35:45.764 lat (usec) : 750=40.68%, 1000=8.16% 00:35:45.764 lat (msec) : 2=1.05%, 50=50.11% 00:35:45.764 cpu : usr=94.33%, sys=5.37%, ctx=14, majf=0, minf=180 00:35:45.764 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:45.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.764 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.764 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:45.764 00:35:45.764 Run status group 0 (all jobs): 00:35:45.764 READ: bw=1145KiB/s (1172kB/s), 387KiB/s-760KiB/s (397kB/s-778kB/s), io=11.2MiB (11.8MB), 
run=10006-10034msec 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.764 19:07:33 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.764 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.764 00:35:45.764 real 0m11.302s 00:35:45.764 user 0m20.230s 00:35:45.764 sys 0m1.395s 00:35:45.765 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:45.765 19:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:45.765 ************************************ 00:35:45.765 END TEST fio_dif_1_multi_subsystems 00:35:45.765 ************************************ 00:35:45.765 19:07:33 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:45.765 19:07:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:45.765 19:07:33 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:45.765 19:07:33 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:45.765 19:07:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:45.765 ************************************ 00:35:45.765 START TEST fio_dif_rand_params 00:35:45.765 ************************************ 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:45.765 
19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.765 bdev_null0 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:45.765 19:07:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:45.765 [2024-07-14 19:07:33.847139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:45.765 { 00:35:45.765 "params": { 00:35:45.765 "name": "Nvme$subsystem", 00:35:45.765 "trtype": "$TEST_TRANSPORT", 00:35:45.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:45.765 "adrfam": "ipv4", 00:35:45.765 "trsvcid": "$NVMF_PORT", 00:35:45.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:45.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:45.765 "hdgst": ${hdgst:-false}, 00:35:45.765 "ddgst": 
${ddgst:-false} 00:35:45.765 }, 00:35:45.765 "method": "bdev_nvme_attach_controller" 00:35:45.765 } 00:35:45.765 EOF 00:35:45.765 )") 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:45.765 19:07:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:45.765 "params": { 00:35:45.765 "name": "Nvme0", 00:35:45.765 "trtype": "tcp", 00:35:45.765 "traddr": "10.0.0.2", 00:35:45.765 "adrfam": "ipv4", 00:35:45.765 "trsvcid": "4420", 00:35:45.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:45.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:45.765 "hdgst": false, 00:35:45.765 "ddgst": false 00:35:45.765 }, 00:35:45.765 "method": "bdev_nvme_attach_controller" 00:35:45.765 }' 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:45.765 19:07:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:46.023 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:46.023 ... 00:35:46.023 fio-3.35 00:35:46.023 Starting 3 threads 00:35:46.023 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.578 00:35:52.578 filename0: (groupid=0, jobs=1): err= 0: pid=3767759: Sun Jul 14 19:07:39 2024 00:35:52.578 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(147MiB/5045msec) 00:35:52.578 slat (nsec): min=5017, max=48133, avg=20058.36, stdev=5517.20 00:35:52.578 clat (usec): min=4826, max=54303, avg=12786.31, stdev=7161.01 00:35:52.578 lat (usec): min=4838, max=54320, avg=12806.37, stdev=7160.81 00:35:52.578 clat percentiles (usec): 00:35:52.578 | 1.00th=[ 6325], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9372], 00:35:52.578 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:35:52.578 | 70.00th=[12911], 80.00th=[13698], 90.00th=[14877], 95.00th=[16319], 00:35:52.578 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53740], 99.95th=[54264], 00:35:52.578 | 99.99th=[54264] 00:35:52.578 bw ( KiB/s): min=22528, max=34560, per=35.11%, avg=30105.60, stdev=3785.95, samples=10 00:35:52.579 iops : min= 176, max= 270, avg=235.20, stdev=29.58, samples=10 00:35:52.579 lat (msec) : 10=24.79%, 20=71.90%, 50=1.78%, 100=1.53% 00:35:52.579 cpu : usr=91.02%, sys=7.57%, ctx=289, majf=0, minf=122 00:35:52.579 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.579 issued rwts: total=1178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.579 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:35:52.579 filename0: (groupid=0, jobs=1): err= 0: pid=3767760: Sun Jul 14 19:07:39 2024 00:35:52.579 read: IOPS=206, BW=25.9MiB/s (27.1MB/s)(131MiB/5045msec) 00:35:52.579 slat (nsec): min=4953, max=84658, avg=15425.33, stdev=4211.82 00:35:52.579 clat (usec): min=5418, max=55509, avg=14436.39, stdev=10081.04 00:35:52.579 lat (usec): min=5430, max=55524, avg=14451.82, stdev=10081.10 00:35:52.579 clat percentiles (usec): 00:35:52.579 | 1.00th=[ 5866], 5.00th=[ 7832], 10.00th=[ 8979], 20.00th=[10552], 00:35:52.579 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12125], 60.00th=[12518], 00:35:52.579 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15270], 95.00th=[49021], 00:35:52.579 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:35:52.579 | 99.99th=[55313] 00:35:52.579 bw ( KiB/s): min=16384, max=30720, per=31.09%, avg=26654.50, stdev=4302.54, samples=10 00:35:52.579 iops : min= 128, max= 240, avg=208.20, stdev=33.63, samples=10 00:35:52.579 lat (msec) : 10=15.61%, 20=77.59%, 50=2.59%, 100=4.21% 00:35:52.579 cpu : usr=94.35%, sys=5.23%, ctx=12, majf=0, minf=116 00:35:52.579 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.579 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.579 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:52.579 filename0: (groupid=0, jobs=1): err= 0: pid=3767761: Sun Jul 14 19:07:39 2024 00:35:52.579 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(145MiB/5046msec) 00:35:52.579 slat (nsec): min=4987, max=51383, avg=17101.07, stdev=4779.79 00:35:52.579 clat (usec): min=4418, max=93816, avg=13014.61, stdev=7195.17 00:35:52.579 lat (usec): min=4430, max=93831, avg=13031.71, stdev=7195.74 00:35:52.579 clat percentiles (usec): 00:35:52.579 | 1.00th=[ 5211], 5.00th=[ 6718], 
10.00th=[ 8029], 20.00th=[ 9372], 00:35:52.579 | 30.00th=[10552], 40.00th=[11600], 50.00th=[12387], 60.00th=[13042], 00:35:52.579 | 70.00th=[13829], 80.00th=[14746], 90.00th=[15926], 95.00th=[16712], 00:35:52.579 | 99.00th=[52167], 99.50th=[53740], 99.90th=[55837], 99.95th=[93848], 00:35:52.579 | 99.99th=[93848] 00:35:52.579 bw ( KiB/s): min=26368, max=37194, per=34.49%, avg=29575.40, stdev=3357.01, samples=10 00:35:52.579 iops : min= 206, max= 290, avg=231.00, stdev=26.08, samples=10 00:35:52.579 lat (msec) : 10=25.22%, 20=72.11%, 50=1.04%, 100=1.64% 00:35:52.579 cpu : usr=92.27%, sys=6.72%, ctx=77, majf=0, minf=90 00:35:52.579 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.579 issued rwts: total=1158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.579 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:52.579 00:35:52.579 Run status group 0 (all jobs): 00:35:52.579 READ: bw=83.7MiB/s (87.8MB/s), 25.9MiB/s-29.2MiB/s (27.1MB/s-30.6MB/s), io=423MiB (443MB), run=5045-5046msec 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 bdev_null0 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 [2024-07-14 19:07:40.050352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 bdev_null1 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.579 bdev_null2 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.579 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 
2 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.580 { 00:35:52.580 "params": { 00:35:52.580 "name": "Nvme$subsystem", 00:35:52.580 "trtype": "$TEST_TRANSPORT", 00:35:52.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.580 "adrfam": "ipv4", 00:35:52.580 "trsvcid": "$NVMF_PORT", 00:35:52.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.580 "hdgst": ${hdgst:-false}, 00:35:52.580 "ddgst": ${ddgst:-false} 00:35:52.580 }, 00:35:52.580 "method": "bdev_nvme_attach_controller" 00:35:52.580 } 00:35:52.580 EOF 00:35:52.580 )") 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.580 { 00:35:52.580 "params": { 00:35:52.580 "name": "Nvme$subsystem", 00:35:52.580 "trtype": "$TEST_TRANSPORT", 00:35:52.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.580 "adrfam": "ipv4", 00:35:52.580 "trsvcid": "$NVMF_PORT", 00:35:52.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.580 "hdgst": ${hdgst:-false}, 00:35:52.580 "ddgst": ${ddgst:-false} 00:35:52.580 }, 00:35:52.580 "method": "bdev_nvme_attach_controller" 
00:35:52.580 } 00:35:52.580 EOF 00:35:52.580 )") 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:52.580 { 00:35:52.580 "params": { 00:35:52.580 "name": "Nvme$subsystem", 00:35:52.580 "trtype": "$TEST_TRANSPORT", 00:35:52.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:52.580 "adrfam": "ipv4", 00:35:52.580 "trsvcid": "$NVMF_PORT", 00:35:52.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:52.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:52.580 "hdgst": ${hdgst:-false}, 00:35:52.580 "ddgst": ${ddgst:-false} 00:35:52.580 }, 00:35:52.580 "method": "bdev_nvme_attach_controller" 00:35:52.580 } 00:35:52.580 EOF 00:35:52.580 )") 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:52.580 "params": { 00:35:52.580 "name": "Nvme0", 00:35:52.580 "trtype": "tcp", 00:35:52.580 "traddr": "10.0.0.2", 00:35:52.580 "adrfam": "ipv4", 00:35:52.580 "trsvcid": "4420", 00:35:52.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:52.580 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:52.580 "hdgst": false, 00:35:52.580 "ddgst": false 00:35:52.580 }, 00:35:52.580 "method": "bdev_nvme_attach_controller" 00:35:52.580 },{ 00:35:52.580 "params": { 00:35:52.580 "name": "Nvme1", 00:35:52.580 "trtype": "tcp", 00:35:52.580 "traddr": "10.0.0.2", 00:35:52.580 "adrfam": "ipv4", 00:35:52.580 "trsvcid": "4420", 00:35:52.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:52.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:52.580 "hdgst": false, 00:35:52.580 "ddgst": false 00:35:52.580 }, 00:35:52.580 "method": "bdev_nvme_attach_controller" 00:35:52.580 },{ 00:35:52.580 "params": { 00:35:52.580 "name": "Nvme2", 00:35:52.580 "trtype": "tcp", 00:35:52.580 "traddr": "10.0.0.2", 00:35:52.580 "adrfam": "ipv4", 00:35:52.580 "trsvcid": "4420", 00:35:52.580 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:52.580 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:52.580 "hdgst": false, 00:35:52.580 "ddgst": false 00:35:52.580 }, 00:35:52.580 "method": "bdev_nvme_attach_controller" 00:35:52.580 }' 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:52.580 19:07:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:52.580 19:07:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:52.580 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:52.580 ... 00:35:52.580 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:52.580 ... 00:35:52.580 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:52.580 ... 
00:35:52.580 fio-3.35 00:35:52.580 Starting 24 threads 00:35:52.580 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.775 00:36:04.775 filename0: (groupid=0, jobs=1): err= 0: pid=3768625: Sun Jul 14 19:07:51 2024 00:36:04.775 read: IOPS=487, BW=1950KiB/s (1996kB/s)(19.1MiB/10006msec) 00:36:04.775 slat (nsec): min=3757, max=96486, avg=23752.63, stdev=12749.16 00:36:04.775 clat (usec): min=1621, max=42331, avg=32626.75, stdev=4306.18 00:36:04.775 lat (usec): min=1629, max=42341, avg=32650.50, stdev=4308.02 00:36:04.775 clat percentiles (usec): 00:36:04.775 | 1.00th=[ 5145], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:36:04.775 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:36:04.775 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.775 | 99.00th=[34866], 99.50th=[34866], 99.90th=[42206], 99.95th=[42206], 00:36:04.775 | 99.99th=[42206] 00:36:04.775 bw ( KiB/s): min= 1792, max= 2792, per=4.27%, avg=1952.42, stdev=207.25, samples=19 00:36:04.775 iops : min= 448, max= 698, avg=488.11, stdev=51.81, samples=19 00:36:04.775 lat (msec) : 2=0.66%, 10=0.98%, 20=1.25%, 50=97.11% 00:36:04.775 cpu : usr=95.54%, sys=2.50%, ctx=261, majf=0, minf=28 00:36:04.775 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:04.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 issued rwts: total=4877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.775 filename0: (groupid=0, jobs=1): err= 0: pid=3768626: Sun Jul 14 19:07:51 2024 00:36:04.775 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:36:04.775 slat (usec): min=13, max=117, avg=51.42, stdev=16.76 00:36:04.775 clat (usec): min=17202, max=44464, avg=33120.01, stdev=1264.46 00:36:04.775 lat (usec): min=17239, max=44522, avg=33171.43, 
stdev=1264.79 00:36:04.775 clat percentiles (usec): 00:36:04.775 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:04.775 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:36:04.775 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.775 | 99.00th=[34341], 99.50th=[34866], 99.90th=[44303], 99.95th=[44303], 00:36:04.775 | 99.99th=[44303] 00:36:04.775 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1899.79, stdev=47.95, samples=19 00:36:04.775 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:36:04.775 lat (msec) : 20=0.34%, 50=99.66% 00:36:04.775 cpu : usr=96.49%, sys=2.34%, ctx=147, majf=0, minf=25 00:36:04.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.775 filename0: (groupid=0, jobs=1): err= 0: pid=3768627: Sun Jul 14 19:07:51 2024 00:36:04.775 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10009msec) 00:36:04.775 slat (nsec): min=8661, max=97839, avg=40275.35, stdev=17504.44 00:36:04.775 clat (usec): min=25486, max=40210, avg=33220.67, stdev=845.31 00:36:04.775 lat (usec): min=25495, max=40251, avg=33260.95, stdev=841.81 00:36:04.775 clat percentiles (usec): 00:36:04.775 | 1.00th=[30540], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:04.775 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:04.775 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.775 | 99.00th=[34866], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:36:04.775 | 99.99th=[40109] 00:36:04.775 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1899.79, stdev=47.95, samples=19 00:36:04.775 
iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:36:04.775 lat (msec) : 50=100.00% 00:36:04.775 cpu : usr=96.44%, sys=2.40%, ctx=206, majf=0, minf=22 00:36:04.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.775 filename0: (groupid=0, jobs=1): err= 0: pid=3768628: Sun Jul 14 19:07:51 2024 00:36:04.775 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10002msec) 00:36:04.775 slat (usec): min=8, max=102, avg=42.05, stdev=19.34 00:36:04.775 clat (usec): min=15552, max=44991, avg=33178.83, stdev=1388.65 00:36:04.775 lat (usec): min=15561, max=45010, avg=33220.88, stdev=1387.47 00:36:04.775 clat percentiles (usec): 00:36:04.775 | 1.00th=[30540], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:04.775 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:04.775 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.775 | 99.00th=[34866], 99.50th=[35390], 99.90th=[44827], 99.95th=[44827], 00:36:04.775 | 99.99th=[44827] 00:36:04.775 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1899.79, stdev=47.95, samples=19 00:36:04.775 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:36:04.775 lat (msec) : 20=0.34%, 50=99.66% 00:36:04.775 cpu : usr=95.10%, sys=2.90%, ctx=292, majf=0, minf=21 00:36:04.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.775 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:36:04.775 filename0: (groupid=0, jobs=1): err= 0: pid=3768629: Sun Jul 14 19:07:51 2024 00:36:04.775 read: IOPS=475, BW=1903KiB/s (1949kB/s)(18.6MiB/10020msec) 00:36:04.775 slat (usec): min=8, max=101, avg=41.01, stdev=19.10 00:36:04.775 clat (usec): min=19926, max=52752, avg=33252.14, stdev=1438.20 00:36:04.775 lat (usec): min=19952, max=52794, avg=33293.15, stdev=1436.67 00:36:04.775 clat percentiles (usec): 00:36:04.775 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:04.775 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:04.775 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.775 | 99.00th=[34866], 99.50th=[34866], 99.90th=[52691], 99.95th=[52691], 00:36:04.775 | 99.99th=[52691] 00:36:04.775 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1900.80, stdev=46.89, samples=20 00:36:04.775 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:36:04.775 lat (msec) : 20=0.06%, 50=99.60%, 100=0.34% 00:36:04.775 cpu : usr=96.71%, sys=2.22%, ctx=98, majf=0, minf=24 00:36:04.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.775 filename0: (groupid=0, jobs=1): err= 0: pid=3768630: Sun Jul 14 19:07:51 2024 00:36:04.775 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10005msec) 00:36:04.775 slat (usec): min=12, max=138, avg=39.78, stdev=13.32 00:36:04.775 clat (usec): min=11254, max=66623, avg=33196.05, stdev=2543.18 00:36:04.775 lat (usec): min=11290, max=66647, avg=33235.83, stdev=2542.81 00:36:04.775 clat percentiles (usec): 00:36:04.775 | 1.00th=[26346], 5.00th=[32637], 10.00th=[32637], 
20.00th=[32900], 00:36:04.775 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:04.775 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.775 | 99.00th=[34341], 99.50th=[34866], 99.90th=[66323], 99.95th=[66847], 00:36:04.775 | 99.99th=[66847] 00:36:04.775 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.775 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.775 lat (msec) : 20=0.67%, 50=98.99%, 100=0.34% 00:36:04.775 cpu : usr=97.59%, sys=1.77%, ctx=37, majf=0, minf=26 00:36:04.775 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.775 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.775 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.776 filename0: (groupid=0, jobs=1): err= 0: pid=3768631: Sun Jul 14 19:07:51 2024 00:36:04.776 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10005msec) 00:36:04.776 slat (usec): min=15, max=113, avg=40.81, stdev=14.88 00:36:04.776 clat (usec): min=25767, max=61743, avg=33304.73, stdev=1736.23 00:36:04.776 lat (usec): min=25819, max=61787, avg=33345.54, stdev=1736.36 00:36:04.776 clat percentiles (usec): 00:36:04.776 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:36:04.776 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:04.776 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.776 | 99.00th=[34341], 99.50th=[34866], 99.90th=[61604], 99.95th=[61604], 00:36:04.776 | 99.99th=[61604] 00:36:04.776 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.776 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.776 lat (msec) : 50=99.66%, 100=0.34% 
00:36:04.776 cpu : usr=98.01%, sys=1.56%, ctx=14, majf=0, minf=19 00:36:04.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.776 filename0: (groupid=0, jobs=1): err= 0: pid=3768632: Sun Jul 14 19:07:51 2024 00:36:04.776 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:36:04.776 slat (usec): min=8, max=100, avg=19.10, stdev=13.51 00:36:04.776 clat (usec): min=13040, max=51734, avg=33421.54, stdev=1673.59 00:36:04.776 lat (usec): min=13112, max=51745, avg=33440.64, stdev=1670.27 00:36:04.776 clat percentiles (usec): 00:36:04.776 | 1.00th=[30278], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:36:04.776 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:36:04.776 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:36:04.776 | 99.00th=[34866], 99.50th=[35390], 99.90th=[49546], 99.95th=[49546], 00:36:04.776 | 99.99th=[51643] 00:36:04.776 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1899.79, stdev=47.95, samples=19 00:36:04.776 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:36:04.776 lat (msec) : 20=0.38%, 50=99.58%, 100=0.04% 00:36:04.776 cpu : usr=97.51%, sys=1.98%, ctx=30, majf=0, minf=20 00:36:04.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:04.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.776 filename1: (groupid=0, jobs=1): err= 0: 
pid=3768633: Sun Jul 14 19:07:51 2024 00:36:04.776 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10005msec) 00:36:04.776 slat (nsec): min=13458, max=87753, avg=38939.47, stdev=11582.32 00:36:04.776 clat (usec): min=11269, max=66597, avg=33216.16, stdev=2533.10 00:36:04.776 lat (usec): min=11292, max=66640, avg=33255.10, stdev=2533.59 00:36:04.776 clat percentiles (usec): 00:36:04.776 | 1.00th=[26346], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:36:04.776 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:04.776 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.776 | 99.00th=[34341], 99.50th=[34866], 99.90th=[66323], 99.95th=[66323], 00:36:04.776 | 99.99th=[66847] 00:36:04.776 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.776 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.776 lat (msec) : 20=0.67%, 50=98.99%, 100=0.34% 00:36:04.776 cpu : usr=98.22%, sys=1.40%, ctx=15, majf=0, minf=17 00:36:04.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.776 filename1: (groupid=0, jobs=1): err= 0: pid=3768634: Sun Jul 14 19:07:51 2024 00:36:04.776 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10007msec) 00:36:04.776 slat (nsec): min=8183, max=84898, avg=32689.31, stdev=15680.89 00:36:04.776 clat (usec): min=10791, max=59890, avg=33283.52, stdev=2136.34 00:36:04.776 lat (usec): min=10803, max=59931, avg=33316.21, stdev=2136.78 00:36:04.776 clat percentiles (usec): 00:36:04.776 | 1.00th=[29492], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:36:04.776 | 30.00th=[33162], 40.00th=[33162], 
50.00th=[33424], 60.00th=[33424], 00:36:04.776 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.776 | 99.00th=[34866], 99.50th=[35914], 99.90th=[60031], 99.95th=[60031], 00:36:04.776 | 99.99th=[60031] 00:36:04.776 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1893.05, stdev=53.61, samples=19 00:36:04.776 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:36:04.776 lat (msec) : 20=0.44%, 50=99.22%, 100=0.34% 00:36:04.776 cpu : usr=98.12%, sys=1.49%, ctx=15, majf=0, minf=24 00:36:04.776 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:04.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.776 filename1: (groupid=0, jobs=1): err= 0: pid=3768635: Sun Jul 14 19:07:51 2024 00:36:04.776 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10005msec) 00:36:04.776 slat (nsec): min=8804, max=89711, avg=36509.46, stdev=13593.86 00:36:04.776 clat (usec): min=25885, max=61575, avg=33389.77, stdev=1717.13 00:36:04.776 lat (usec): min=25943, max=61618, avg=33426.28, stdev=1716.59 00:36:04.776 clat percentiles (usec): 00:36:04.776 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:36:04.776 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:36:04.776 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.776 | 99.00th=[34866], 99.50th=[34866], 99.90th=[61604], 99.95th=[61604], 00:36:04.776 | 99.99th=[61604] 00:36:04.776 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.776 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.776 lat (msec) : 50=99.66%, 100=0.34% 00:36:04.776 cpu : usr=95.67%, sys=2.73%, ctx=231, majf=0, 
minf=27 00:36:04.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.776 filename1: (groupid=0, jobs=1): err= 0: pid=3768636: Sun Jul 14 19:07:51 2024 00:36:04.776 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.6MiB/10008msec) 00:36:04.776 slat (nsec): min=8536, max=96150, avg=47451.42, stdev=14921.61 00:36:04.776 clat (usec): min=11168, max=68769, avg=33161.29, stdev=2374.44 00:36:04.776 lat (usec): min=11211, max=68817, avg=33208.74, stdev=2374.41 00:36:04.776 clat percentiles (usec): 00:36:04.776 | 1.00th=[26346], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:36:04.776 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:36:04.776 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.776 | 99.00th=[34866], 99.50th=[42730], 99.90th=[59507], 99.95th=[59507], 00:36:04.776 | 99.99th=[68682] 00:36:04.776 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1893.05, stdev=53.61, samples=19 00:36:04.776 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:36:04.776 lat (msec) : 20=0.67%, 50=98.99%, 100=0.34% 00:36:04.776 cpu : usr=98.07%, sys=1.53%, ctx=19, majf=0, minf=26 00:36:04.776 IO depths : 1=5.7%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:04.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.776 filename1: (groupid=0, jobs=1): err= 0: pid=3768637: Sun Jul 14 19:07:51 2024 
00:36:04.776 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10004msec) 00:36:04.776 slat (usec): min=8, max=111, avg=40.72, stdev=20.79 00:36:04.776 clat (usec): min=13364, max=54402, avg=33226.19, stdev=1780.91 00:36:04.776 lat (usec): min=13448, max=54430, avg=33266.92, stdev=1777.22 00:36:04.776 clat percentiles (usec): 00:36:04.776 | 1.00th=[30278], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:04.776 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:36:04.776 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.776 | 99.00th=[34866], 99.50th=[35390], 99.90th=[51119], 99.95th=[52691], 00:36:04.776 | 99.99th=[54264] 00:36:04.776 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1906.53, stdev=40.36, samples=19 00:36:04.776 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:36:04.776 lat (msec) : 20=0.46%, 50=99.37%, 100=0.17% 00:36:04.776 cpu : usr=97.77%, sys=1.82%, ctx=15, majf=0, minf=24 00:36:04.776 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:04.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.776 filename1: (groupid=0, jobs=1): err= 0: pid=3768638: Sun Jul 14 19:07:51 2024 00:36:04.776 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10005msec) 00:36:04.776 slat (nsec): min=8455, max=85085, avg=35350.04, stdev=15646.27 00:36:04.776 clat (usec): min=25798, max=61479, avg=33407.49, stdev=1722.38 00:36:04.776 lat (usec): min=25828, max=61511, avg=33442.84, stdev=1720.21 00:36:04.776 clat percentiles (usec): 00:36:04.776 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:36:04.776 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:36:04.776 | 
70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.776 | 99.00th=[34866], 99.50th=[34866], 99.90th=[61604], 99.95th=[61604], 00:36:04.776 | 99.99th=[61604] 00:36:04.776 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.776 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.776 lat (msec) : 50=99.66%, 100=0.34% 00:36:04.776 cpu : usr=98.11%, sys=1.52%, ctx=12, majf=0, minf=16 00:36:04.776 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.776 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.776 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.777 filename1: (groupid=0, jobs=1): err= 0: pid=3768639: Sun Jul 14 19:07:51 2024 00:36:04.777 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10009msec) 00:36:04.777 slat (usec): min=12, max=222, avg=49.24, stdev=18.17 00:36:04.777 clat (usec): min=17197, max=48326, avg=33144.45, stdev=1399.15 00:36:04.777 lat (usec): min=17238, max=48386, avg=33193.69, stdev=1399.14 00:36:04.777 clat percentiles (usec): 00:36:04.777 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:04.777 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:36:04.777 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.777 | 99.00th=[34341], 99.50th=[34866], 99.90th=[47973], 99.95th=[48497], 00:36:04.777 | 99.99th=[48497] 00:36:04.777 bw ( KiB/s): min= 1788, max= 2048, per=4.15%, avg=1899.58, stdev=64.57, samples=19 00:36:04.777 iops : min= 447, max= 512, avg=474.89, stdev=16.14, samples=19 00:36:04.777 lat (msec) : 20=0.34%, 50=99.66% 00:36:04.777 cpu : usr=97.80%, sys=1.80%, ctx=25, majf=0, minf=24 00:36:04.777 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.777 filename1: (groupid=0, jobs=1): err= 0: pid=3768640: Sun Jul 14 19:07:51 2024 00:36:04.777 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10002msec) 00:36:04.777 slat (usec): min=8, max=115, avg=32.48, stdev=23.99 00:36:04.777 clat (usec): min=4429, max=39922, avg=33062.01, stdev=2512.47 00:36:04.777 lat (usec): min=4470, max=39957, avg=33094.49, stdev=2510.57 00:36:04.777 clat percentiles (usec): 00:36:04.777 | 1.00th=[14746], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:04.777 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:36:04.777 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:36:04.777 | 99.00th=[34866], 99.50th=[34866], 99.90th=[38011], 99.95th=[38011], 00:36:04.777 | 99.99th=[40109] 00:36:04.777 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1920.00, stdev=42.67, samples=19 00:36:04.777 iops : min= 448, max= 512, avg=480.00, stdev=10.67, samples=19 00:36:04.777 lat (msec) : 10=0.33%, 20=0.67%, 50=99.00% 00:36:04.777 cpu : usr=97.96%, sys=1.63%, ctx=14, majf=0, minf=22 00:36:04.777 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:04.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.777 filename2: (groupid=0, jobs=1): err= 0: pid=3768641: Sun Jul 14 19:07:51 2024 00:36:04.777 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10009msec) 00:36:04.777 slat 
(usec): min=8, max=110, avg=43.71, stdev=19.43 00:36:04.777 clat (usec): min=25526, max=39937, avg=33194.10, stdev=842.07 00:36:04.777 lat (usec): min=25561, max=39979, avg=33237.81, stdev=838.11 00:36:04.777 clat percentiles (usec): 00:36:04.777 | 1.00th=[30540], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:04.777 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:04.777 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.777 | 99.00th=[34866], 99.50th=[35390], 99.90th=[39584], 99.95th=[40109], 00:36:04.777 | 99.99th=[40109] 00:36:04.777 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1899.79, stdev=47.95, samples=19 00:36:04.777 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:36:04.777 lat (msec) : 50=100.00% 00:36:04.777 cpu : usr=98.15%, sys=1.45%, ctx=11, majf=0, minf=22 00:36:04.777 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.777 filename2: (groupid=0, jobs=1): err= 0: pid=3768642: Sun Jul 14 19:07:51 2024 00:36:04.777 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10005msec) 00:36:04.777 slat (usec): min=9, max=111, avg=39.29, stdev=11.95 00:36:04.777 clat (usec): min=10430, max=66649, avg=33212.49, stdev=2572.63 00:36:04.777 lat (usec): min=10439, max=66671, avg=33251.78, stdev=2572.22 00:36:04.777 clat percentiles (usec): 00:36:04.777 | 1.00th=[26346], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:36:04.777 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:04.777 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.777 | 99.00th=[34866], 99.50th=[35390], 
99.90th=[66847], 99.95th=[66847], 00:36:04.777 | 99.99th=[66847] 00:36:04.777 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.777 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.777 lat (msec) : 20=0.67%, 50=98.99%, 100=0.34% 00:36:04.777 cpu : usr=98.08%, sys=1.53%, ctx=7, majf=0, minf=22 00:36:04.777 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:04.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.777 filename2: (groupid=0, jobs=1): err= 0: pid=3768643: Sun Jul 14 19:07:51 2024 00:36:04.777 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10005msec) 00:36:04.777 slat (usec): min=14, max=122, avg=40.68, stdev=14.84 00:36:04.777 clat (usec): min=25774, max=61756, avg=33306.31, stdev=1736.30 00:36:04.777 lat (usec): min=25826, max=61801, avg=33346.99, stdev=1736.46 00:36:04.777 clat percentiles (usec): 00:36:04.777 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:36:04.777 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:36:04.777 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.777 | 99.00th=[34341], 99.50th=[34866], 99.90th=[61604], 99.95th=[61604], 00:36:04.777 | 99.99th=[61604] 00:36:04.777 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.777 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.777 lat (msec) : 50=99.66%, 100=0.34% 00:36:04.777 cpu : usr=98.33%, sys=1.25%, ctx=14, majf=0, minf=19 00:36:04.777 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:04.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:04.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.777 filename2: (groupid=0, jobs=1): err= 0: pid=3768644: Sun Jul 14 19:07:51 2024 00:36:04.777 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10002msec) 00:36:04.777 slat (nsec): min=8113, max=70840, avg=25428.08, stdev=11090.18 00:36:04.777 clat (usec): min=15056, max=70968, avg=33447.26, stdev=1686.24 00:36:04.777 lat (usec): min=15106, max=71038, avg=33472.69, stdev=1686.72 00:36:04.777 clat percentiles (usec): 00:36:04.777 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:36:04.777 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:36:04.777 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.777 | 99.00th=[34866], 99.50th=[35390], 99.90th=[58459], 99.95th=[58459], 00:36:04.777 | 99.99th=[70779] 00:36:04.777 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1899.79, stdev=47.95, samples=19 00:36:04.777 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:36:04.777 lat (msec) : 20=0.04%, 50=99.62%, 100=0.34% 00:36:04.777 cpu : usr=96.41%, sys=2.09%, ctx=184, majf=0, minf=32 00:36:04.777 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:04.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.777 filename2: (groupid=0, jobs=1): err= 0: pid=3768645: Sun Jul 14 19:07:51 2024 00:36:04.777 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:36:04.777 slat (usec): min=8, max=169, avg=28.49, stdev=19.33 00:36:04.777 clat (usec): min=4245, max=40384, 
avg=33094.77, stdev=2586.91 00:36:04.777 lat (usec): min=4272, max=40412, avg=33123.26, stdev=2585.09 00:36:04.777 clat percentiles (usec): 00:36:04.777 | 1.00th=[14353], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:36:04.777 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:36:04.777 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.777 | 99.00th=[34866], 99.50th=[34866], 99.90th=[40109], 99.95th=[40109], 00:36:04.777 | 99.99th=[40633] 00:36:04.777 bw ( KiB/s): min= 1792, max= 2176, per=4.18%, avg=1913.26, stdev=79.52, samples=19 00:36:04.777 iops : min= 448, max= 544, avg=478.32, stdev=19.88, samples=19 00:36:04.777 lat (msec) : 10=0.67%, 20=0.33%, 50=99.00% 00:36:04.777 cpu : usr=96.32%, sys=2.35%, ctx=149, majf=0, minf=34 00:36:04.777 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:04.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.777 filename2: (groupid=0, jobs=1): err= 0: pid=3768646: Sun Jul 14 19:07:51 2024 00:36:04.777 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10005msec) 00:36:04.777 slat (usec): min=8, max=103, avg=45.73, stdev=13.74 00:36:04.777 clat (usec): min=11239, max=66431, avg=33171.08, stdev=2540.43 00:36:04.777 lat (usec): min=11274, max=66474, avg=33216.81, stdev=2540.90 00:36:04.777 clat percentiles (usec): 00:36:04.777 | 1.00th=[26346], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:36:04.777 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:36:04.777 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.777 | 99.00th=[34866], 99.50th=[34866], 99.90th=[66323], 99.95th=[66323], 00:36:04.777 | 99.99th=[66323] 00:36:04.777 bw ( 
KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.777 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.777 lat (msec) : 20=0.67%, 50=98.99%, 100=0.34% 00:36:04.777 cpu : usr=98.18%, sys=1.40%, ctx=20, majf=0, minf=21 00:36:04.777 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:04.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.777 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.777 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.777 filename2: (groupid=0, jobs=1): err= 0: pid=3768647: Sun Jul 14 19:07:51 2024 00:36:04.777 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10009msec) 00:36:04.778 slat (usec): min=8, max=102, avg=36.23, stdev=13.90 00:36:04.778 clat (usec): min=25871, max=61608, avg=33402.69, stdev=1738.95 00:36:04.778 lat (usec): min=25916, max=61651, avg=33438.91, stdev=1738.10 00:36:04.778 clat percentiles (usec): 00:36:04.778 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:36:04.778 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:36:04.778 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:04.778 | 99.00th=[34866], 99.50th=[35390], 99.90th=[61604], 99.95th=[61604], 00:36:04.778 | 99.99th=[61604] 00:36:04.778 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.778 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.778 lat (msec) : 50=99.66%, 100=0.34% 00:36:04.778 cpu : usr=97.60%, sys=1.70%, ctx=75, majf=0, minf=23 00:36:04.778 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:04.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.778 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:04.778 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.778 filename2: (groupid=0, jobs=1): err= 0: pid=3768648: Sun Jul 14 19:07:51 2024 00:36:04.778 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10005msec) 00:36:04.778 slat (usec): min=8, max=109, avg=45.40, stdev=17.27 00:36:04.778 clat (usec): min=6192, max=66425, avg=33026.27, stdev=3016.12 00:36:04.778 lat (usec): min=6201, max=66476, avg=33071.67, stdev=3018.95 00:36:04.778 clat percentiles (usec): 00:36:04.778 | 1.00th=[19792], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:36:04.778 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:36:04.778 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:04.778 | 99.00th=[36439], 99.50th=[45876], 99.90th=[66323], 99.95th=[66323], 00:36:04.778 | 99.99th=[66323] 00:36:04.778 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=68.52, samples=19 00:36:04.778 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:36:04.778 lat (msec) : 10=0.13%, 20=0.88%, 50=98.66%, 100=0.33% 00:36:04.778 cpu : usr=96.99%, sys=2.04%, ctx=39, majf=0, minf=23 00:36:04.778 IO depths : 1=5.8%, 2=11.7%, 4=23.7%, 8=51.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:04.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.778 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.778 issued rwts: total=4786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.778 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:04.778 00:36:04.778 Run status group 0 (all jobs): 00:36:04.778 READ: bw=44.6MiB/s (46.8MB/s), 1899KiB/s-1950KiB/s (1945kB/s-1996kB/s), io=447MiB (469MB), run=10002-10020msec 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # 
local sub 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:04.778 19:07:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@117 -- # create_subsystems 0 1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 bdev_null0 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:04.778 19:07:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 [2024-07-14 19:07:51.863302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 bdev_null1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 
19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.778 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:04.778 { 00:36:04.778 "params": { 00:36:04.778 "name": "Nvme$subsystem", 00:36:04.778 "trtype": "$TEST_TRANSPORT", 00:36:04.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.778 "adrfam": "ipv4", 00:36:04.778 "trsvcid": "$NVMF_PORT", 00:36:04.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.778 "hdgst": ${hdgst:-false}, 00:36:04.778 "ddgst": ${ddgst:-false} 00:36:04.778 }, 00:36:04.779 "method": "bdev_nvme_attach_controller" 00:36:04.779 } 00:36:04.779 EOF 00:36:04.779 )") 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@82 -- # gen_fio_conf 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:04.779 { 00:36:04.779 "params": { 00:36:04.779 "name": "Nvme$subsystem", 00:36:04.779 "trtype": "$TEST_TRANSPORT", 00:36:04.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.779 "adrfam": "ipv4", 00:36:04.779 "trsvcid": "$NVMF_PORT", 00:36:04.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.779 "hdgst": ${hdgst:-false}, 00:36:04.779 "ddgst": ${ddgst:-false} 00:36:04.779 }, 00:36:04.779 "method": "bdev_nvme_attach_controller" 00:36:04.779 } 00:36:04.779 EOF 00:36:04.779 )") 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:04.779 "params": { 00:36:04.779 "name": "Nvme0", 00:36:04.779 "trtype": "tcp", 00:36:04.779 "traddr": "10.0.0.2", 00:36:04.779 "adrfam": "ipv4", 00:36:04.779 "trsvcid": "4420", 00:36:04.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.779 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.779 "hdgst": false, 00:36:04.779 "ddgst": false 00:36:04.779 }, 00:36:04.779 "method": "bdev_nvme_attach_controller" 00:36:04.779 },{ 00:36:04.779 "params": { 00:36:04.779 "name": "Nvme1", 00:36:04.779 "trtype": "tcp", 00:36:04.779 "traddr": "10.0.0.2", 00:36:04.779 "adrfam": "ipv4", 00:36:04.779 "trsvcid": "4420", 00:36:04.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:04.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:04.779 "hdgst": false, 00:36:04.779 "ddgst": false 00:36:04.779 }, 00:36:04.779 "method": "bdev_nvme_attach_controller" 00:36:04.779 }' 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.779 19:07:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:04.779 19:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.779 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:04.779 ... 00:36:04.779 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:04.779 ... 00:36:04.779 fio-3.35 00:36:04.779 Starting 4 threads 00:36:04.779 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.036 00:36:10.036 filename0: (groupid=0, jobs=1): err= 0: pid=3769912: Sun Jul 14 19:07:57 2024 00:36:10.036 read: IOPS=1866, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5001msec) 00:36:10.036 slat (nsec): min=3970, max=69073, avg=21391.68, stdev=11709.87 00:36:10.036 clat (usec): min=912, max=8074, avg=4206.58, stdev=547.57 00:36:10.036 lat (usec): min=926, max=8086, avg=4227.97, stdev=547.03 00:36:10.036 clat percentiles (usec): 00:36:10.036 | 1.00th=[ 2769], 5.00th=[ 3523], 10.00th=[ 3851], 20.00th=[ 3982], 00:36:10.036 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:36:10.036 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4948], 00:36:10.036 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7439], 99.95th=[ 7635], 00:36:10.036 | 99.99th=[ 8094] 00:36:10.036 bw ( KiB/s): min=13824, max=15232, per=24.89%, avg=14837.33, stdev=509.81, samples=9 00:36:10.036 iops : min= 1728, max= 1904, avg=1854.67, stdev=63.73, samples=9 00:36:10.036 lat (usec) : 1000=0.03% 00:36:10.036 lat (msec) : 2=0.55%, 4=20.87%, 10=78.55% 00:36:10.036 cpu : usr=94.24%, sys=5.22%, ctx=15, majf=0, minf=168 00:36:10.036 IO depths : 1=0.2%, 2=17.6%, 4=56.0%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.036 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.036 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.036 issued rwts: total=9333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.036 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:10.036 filename0: (groupid=0, jobs=1): err= 0: pid=3769913: Sun Jul 14 19:07:57 2024 00:36:10.036 read: IOPS=1864, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5003msec) 00:36:10.036 slat (nsec): min=3790, max=75796, avg=18768.01, stdev=10027.01 00:36:10.036 clat (usec): min=887, max=9345, avg=4229.89, stdev=517.89 00:36:10.036 lat (usec): min=908, max=9374, avg=4248.66, stdev=517.30 00:36:10.036 clat percentiles (usec): 00:36:10.036 | 1.00th=[ 2999], 5.00th=[ 3654], 10.00th=[ 3851], 20.00th=[ 4015], 00:36:10.036 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:36:10.036 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4883], 00:36:10.036 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7767], 99.95th=[ 9110], 00:36:10.036 | 99.99th=[ 9372] 00:36:10.036 bw ( KiB/s): min=14160, max=15104, per=25.02%, avg=14913.60, stdev=311.94, samples=10 00:36:10.036 iops : min= 1770, max= 1888, avg=1864.20, stdev=38.99, samples=10 00:36:10.036 lat (usec) : 1000=0.04% 00:36:10.036 lat (msec) : 2=0.25%, 4=18.11%, 10=81.60% 00:36:10.036 cpu : usr=95.14%, sys=4.30%, ctx=11, majf=0, minf=166 00:36:10.036 IO depths : 1=0.2%, 2=11.5%, 4=60.9%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.036 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.036 issued rwts: total=9326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.036 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:10.036 filename1: (groupid=0, jobs=1): err= 0: pid=3769914: Sun Jul 14 19:07:57 2024 00:36:10.036 read: IOPS=1877, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5002msec) 00:36:10.036 slat (nsec): min=3805, 
max=67129, avg=20699.72, stdev=10698.89 00:36:10.036 clat (usec): min=897, max=9649, avg=4187.92, stdev=500.38 00:36:10.036 lat (usec): min=909, max=9672, avg=4208.62, stdev=500.48 00:36:10.036 clat percentiles (usec): 00:36:10.036 | 1.00th=[ 2868], 5.00th=[ 3523], 10.00th=[ 3785], 20.00th=[ 3982], 00:36:10.036 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:36:10.036 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4883], 00:36:10.036 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 8455], 00:36:10.036 | 99.99th=[ 9634] 00:36:10.036 bw ( KiB/s): min=14336, max=15424, per=25.20%, avg=15020.60, stdev=299.25, samples=10 00:36:10.036 iops : min= 1792, max= 1928, avg=1877.50, stdev=37.35, samples=10 00:36:10.036 lat (usec) : 1000=0.02% 00:36:10.036 lat (msec) : 2=0.29%, 4=20.71%, 10=78.98% 00:36:10.036 cpu : usr=94.86%, sys=4.56%, ctx=19, majf=0, minf=127 00:36:10.036 IO depths : 1=0.2%, 2=16.3%, 4=56.7%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.036 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.036 issued rwts: total=9391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.036 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:10.036 filename1: (groupid=0, jobs=1): err= 0: pid=3769915: Sun Jul 14 19:07:57 2024 00:36:10.036 read: IOPS=1845, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5001msec) 00:36:10.036 slat (usec): min=3, max=136, avg=21.98, stdev=12.11 00:36:10.036 clat (usec): min=729, max=8050, avg=4252.60, stdev=588.69 00:36:10.036 lat (usec): min=749, max=8075, avg=4274.58, stdev=587.50 00:36:10.036 clat percentiles (usec): 00:36:10.036 | 1.00th=[ 2311], 5.00th=[ 3752], 10.00th=[ 3916], 20.00th=[ 4015], 00:36:10.036 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:36:10.036 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5145], 00:36:10.036 | 99.00th=[ 
6718], 99.50th=[ 7111], 99.90th=[ 7701], 99.95th=[ 7767], 00:36:10.036 | 99.99th=[ 8029] 00:36:10.036 bw ( KiB/s): min=13312, max=15232, per=24.74%, avg=14750.22, stdev=628.62, samples=9 00:36:10.036 iops : min= 1664, max= 1904, avg=1843.78, stdev=78.58, samples=9 00:36:10.036 lat (usec) : 750=0.01%, 1000=0.13% 00:36:10.036 lat (msec) : 2=0.68%, 4=16.76%, 10=82.41% 00:36:10.036 cpu : usr=93.04%, sys=5.50%, ctx=39, majf=0, minf=97 00:36:10.036 IO depths : 1=0.1%, 2=16.4%, 4=57.2%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.036 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.036 issued rwts: total=9228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.036 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:10.036 00:36:10.036 Run status group 0 (all jobs): 00:36:10.036 READ: bw=58.2MiB/s (61.0MB/s), 14.4MiB/s-14.7MiB/s (15.1MB/s-15.4MB/s), io=291MiB (305MB), run=5001-5003msec 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.036 00:36:10.036 real 0m24.396s 00:36:10.036 user 4m31.595s 00:36:10.036 sys 0m7.513s 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:10.036 19:07:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:10.036 ************************************ 00:36:10.036 END TEST fio_dif_rand_params 00:36:10.036 ************************************ 00:36:10.036 19:07:58 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:10.037 19:07:58 
nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:10.037 19:07:58 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:10.037 19:07:58 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:10.037 19:07:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:10.296 ************************************ 00:36:10.296 START TEST fio_dif_digest 00:36:10.296 ************************************ 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 3 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:10.296 bdev_null0 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:10.296 [2024-07-14 19:07:58.294827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:10.296 19:07:58 
nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:10.296 { 00:36:10.296 "params": { 00:36:10.296 "name": "Nvme$subsystem", 00:36:10.296 "trtype": "$TEST_TRANSPORT", 00:36:10.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:10.296 "adrfam": "ipv4", 00:36:10.296 "trsvcid": "$NVMF_PORT", 00:36:10.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:10.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:10.296 "hdgst": ${hdgst:-false}, 00:36:10.296 "ddgst": ${ddgst:-false} 00:36:10.296 }, 00:36:10.296 "method": "bdev_nvme_attach_controller" 00:36:10.296 } 00:36:10.296 EOF 00:36:10.296 )") 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.296 19:07:58 
nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:10.296 "params": { 00:36:10.296 "name": "Nvme0", 00:36:10.296 "trtype": "tcp", 00:36:10.296 "traddr": "10.0.0.2", 00:36:10.296 "adrfam": "ipv4", 00:36:10.296 "trsvcid": "4420", 00:36:10.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:10.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:10.296 "hdgst": true, 00:36:10.296 "ddgst": true 00:36:10.296 }, 00:36:10.296 "method": "bdev_nvme_attach_controller" 00:36:10.296 }' 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:10.296 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:10.297 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:10.297 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:10.297 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:10.297 19:07:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:10.555 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:10.555 ... 
00:36:10.555 fio-3.35 00:36:10.555 Starting 3 threads 00:36:10.555 EAL: No free 2048 kB hugepages reported on node 1 00:36:22.750 00:36:22.750 filename0: (groupid=0, jobs=1): err= 0: pid=3770777: Sun Jul 14 19:08:09 2024 00:36:22.750 read: IOPS=202, BW=25.3MiB/s (26.6MB/s)(255MiB/10045msec) 00:36:22.750 slat (nsec): min=4965, max=35846, avg=14745.21, stdev=1836.75 00:36:22.750 clat (usec): min=11564, max=53522, avg=14754.70, stdev=1485.86 00:36:22.750 lat (usec): min=11578, max=53537, avg=14769.44, stdev=1485.78 00:36:22.750 clat percentiles (usec): 00:36:22.750 | 1.00th=[12649], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:36:22.750 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:36:22.750 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:36:22.750 | 99.00th=[17171], 99.50th=[17433], 99.90th=[22676], 99.95th=[46924], 00:36:22.750 | 99.99th=[53740] 00:36:22.750 bw ( KiB/s): min=24625, max=26624, per=32.88%, avg=26050.45, stdev=496.90, samples=20 00:36:22.750 iops : min= 192, max= 208, avg=203.50, stdev= 3.94, samples=20 00:36:22.750 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:36:22.750 cpu : usr=92.30%, sys=7.19%, ctx=22, majf=0, minf=159 00:36:22.750 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.750 issued rwts: total=2037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.750 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:22.750 filename0: (groupid=0, jobs=1): err= 0: pid=3770778: Sun Jul 14 19:08:09 2024 00:36:22.750 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(259MiB/10047msec) 00:36:22.750 slat (nsec): min=4862, max=47416, avg=16602.29, stdev=3090.35 00:36:22.750 clat (usec): min=11264, max=52928, avg=14491.74, stdev=1458.97 00:36:22.750 lat (usec): min=11288, max=52942, avg=14508.35, 
stdev=1458.81 00:36:22.750 clat percentiles (usec): 00:36:22.750 | 1.00th=[12387], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:36:22.750 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:36:22.750 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[15926], 00:36:22.750 | 99.00th=[16909], 99.50th=[17171], 99.90th=[22414], 99.95th=[47449], 00:36:22.750 | 99.99th=[52691] 00:36:22.750 bw ( KiB/s): min=25344, max=27392, per=33.46%, avg=26508.80, stdev=541.31, samples=20 00:36:22.750 iops : min= 198, max= 214, avg=207.10, stdev= 4.23, samples=20 00:36:22.750 lat (msec) : 20=99.76%, 50=0.19%, 100=0.05% 00:36:22.750 cpu : usr=91.05%, sys=7.85%, ctx=224, majf=0, minf=140 00:36:22.750 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.750 issued rwts: total=2074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.750 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:22.750 filename0: (groupid=0, jobs=1): err= 0: pid=3770779: Sun Jul 14 19:08:09 2024 00:36:22.750 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(264MiB/10048msec) 00:36:22.750 slat (usec): min=4, max=143, avg=16.82, stdev= 6.11 00:36:22.750 clat (usec): min=11075, max=51718, avg=14258.82, stdev=1497.12 00:36:22.750 lat (usec): min=11090, max=51733, avg=14275.64, stdev=1497.06 00:36:22.750 clat percentiles (usec): 00:36:22.750 | 1.00th=[12125], 5.00th=[12649], 10.00th=[12911], 20.00th=[13435], 00:36:22.750 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:36:22.750 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15401], 95.00th=[15795], 00:36:22.750 | 99.00th=[16712], 99.50th=[17171], 99.90th=[24249], 99.95th=[50070], 00:36:22.750 | 99.99th=[51643] 00:36:22.750 bw ( KiB/s): min=26112, max=27648, per=34.01%, avg=26944.00, stdev=422.49, samples=20 
00:36:22.750 iops : min= 204, max= 216, avg=210.50, stdev= 3.30, samples=20 00:36:22.750 lat (msec) : 20=99.76%, 50=0.14%, 100=0.09% 00:36:22.750 cpu : usr=72.08%, sys=15.47%, ctx=561, majf=0, minf=81 00:36:22.750 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.750 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.750 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:22.750 00:36:22.750 Run status group 0 (all jobs): 00:36:22.750 READ: bw=77.4MiB/s (81.1MB/s), 25.3MiB/s-26.2MiB/s (26.6MB/s-27.5MB/s), io=777MiB (815MB), run=10045-10048msec 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.750 00:36:22.750 real 0m11.012s 00:36:22.750 user 0m26.607s 00:36:22.750 sys 0m3.310s 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:22.750 19:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.750 ************************************ 00:36:22.750 END TEST fio_dif_digest 00:36:22.750 ************************************ 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:22.750 19:08:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:22.750 19:08:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:22.750 rmmod nvme_tcp 00:36:22.750 rmmod nvme_fabrics 00:36:22.750 rmmod nvme_keyring 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3764738 ']' 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3764738 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3764738 ']' 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3764738 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:22.750 19:08:09 nvmf_dif -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3764738 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3764738' 00:36:22.750 killing process with pid 3764738 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3764738 00:36:22.750 19:08:09 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3764738 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:22.750 19:08:09 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:22.750 Waiting for block devices as requested 00:36:22.750 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:22.750 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:22.750 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:23.008 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:23.008 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:23.008 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:23.008 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:23.266 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:23.266 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:23.266 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:23.267 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:23.524 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:23.524 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:23.524 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:23.524 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:23.809 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:23.809 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:23.809 19:08:11 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:23.809 19:08:11 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:23.809 
19:08:11 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:23.809 19:08:11 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:23.809 19:08:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.809 19:08:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:23.809 19:08:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.343 19:08:13 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:26.343 00:36:26.343 real 1m6.313s 00:36:26.343 user 6m25.136s 00:36:26.343 sys 0m20.222s 00:36:26.343 19:08:13 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:26.343 19:08:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:26.343 ************************************ 00:36:26.343 END TEST nvmf_dif 00:36:26.343 ************************************ 00:36:26.343 19:08:14 -- common/autotest_common.sh@1142 -- # return 0 00:36:26.343 19:08:14 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:26.343 19:08:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:26.343 19:08:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:26.343 19:08:14 -- common/autotest_common.sh@10 -- # set +x 00:36:26.343 ************************************ 00:36:26.343 START TEST nvmf_abort_qd_sizes 00:36:26.343 ************************************ 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:26.343 * Looking for test storage... 
00:36:26.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:26.343 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:26.344 19:08:14 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:26.344 19:08:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:28.246 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:28.246 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:28.247 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:36:28.247 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:28.247 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:28.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:28.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:36:28.247 00:36:28.247 --- 10.0.0.2 ping statistics --- 00:36:28.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.247 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:28.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:28.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:36:28.247 00:36:28.247 --- 10.0.0.1 ping statistics --- 00:36:28.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.247 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:28.247 19:08:16 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:29.182 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:29.182 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:29.182 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:29.182 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:29.182 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:29.182 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:29.182 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:29.182 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:29.182 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:29.182 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:29.182 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:29.182 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:29.182 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:29.182 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:29.182 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:36:29.182 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:30.121 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3775568 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3775568 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3775568 ']' 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:30.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:30.380 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.380 [2024-07-14 19:08:18.424332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:30.380 [2024-07-14 19:08:18.424418] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.380 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.380 [2024-07-14 19:08:18.491458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:30.380 [2024-07-14 19:08:18.577095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.380 [2024-07-14 19:08:18.577150] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.380 [2024-07-14 19:08:18.577170] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.380 [2024-07-14 19:08:18.577182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.380 [2024-07-14 19:08:18.577192] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
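The nvmftestinit trace above moves one port of the dual-port NIC into a private network namespace so target and initiator can talk over real hardware on a single host. A dry-run sketch of that plumbing, using the interface names and addresses from this log — the `run()` wrapper is an addition that only prints each command (drop it and run as root to execute for real):

```shell
#!/bin/sh
# Namespace plumbing as performed by nvmf_tcp_init in the trace above.
# Dry-run: run() prints each command instead of executing it (no root needed).
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # moved into the namespace; becomes the target-side port
INI_IF=cvl_0_1   # stays in the default namespace; initiator-side port

run() { printf '%s\n' "$*"; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target reachability check
```

Note the ordering in the trace: both ports are flushed first, and 10.0.0.2 is assigned only after the target port has already been moved into the namespace, which is why the second `ip addr add` runs under `ip netns exec`.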
00:36:30.380 [2024-07-14 19:08:18.577308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.380 [2024-07-14 19:08:18.577331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.380 [2024-07-14 19:08:18.577387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.380 [2024-07-14 19:08:18.577390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.637 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:30.637 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
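The trace that follows attaches the local NVMe device (0000:88:00.0) as a bdev and builds the TCP subsystem through the harness's `rpc_cmd` wrapper. A hedged sketch of the equivalent standalone `rpc.py` sequence — the commands and arguments are taken from this log, while the socket path is the SPDK default and is an assumption here:

```shell
#!/bin/sh
# Equivalent rpc.py calls for the rpc_cmd sequence in this test.
# Dry-run: rpc() prints the full command line instead of executing it.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed default RPC socket path

rpc() { printf '%s %s\n' "$RPC" "$*"; }

rpc bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
```

Because `nvmf_tgt` was started inside the `cvl_0_0_ns_spdk` namespace, the listener address 10.0.0.2 is the namespace-side IP set up earlier, and the initiator reaches it from the default namespace via 10.0.0.1/24.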
00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:30.638 19:08:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:30.638 ************************************ 00:36:30.638 START TEST spdk_target_abort 00:36:30.638 ************************************ 00:36:30.638 19:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:30.638 19:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:30.638 19:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:30.638 19:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.638 19:08:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.926 spdk_targetn1 00:36:33.926 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.927 [2024-07-14 19:08:21.577898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.927 [2024-07-14 19:08:21.610189] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:33.927 19:08:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:33.927 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.215 Initializing NVMe Controllers 00:36:37.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:37.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:37.215 Initialization complete. Launching workers. 
00:36:37.215 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11851, failed: 0 00:36:37.215 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1304, failed to submit 10547 00:36:37.215 success 722, unsuccess 582, failed 0 00:36:37.215 19:08:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.215 19:08:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.215 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.500 Initializing NVMe Controllers 00:36:40.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.500 Initialization complete. Launching workers. 
00:36:40.500 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8679, failed: 0 00:36:40.500 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7453 00:36:40.500 success 356, unsuccess 870, failed 0 00:36:40.500 19:08:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.500 19:08:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.500 EAL: No free 2048 kB hugepages reported on node 1 00:36:43.782 Initializing NVMe Controllers 00:36:43.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:43.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:43.782 Initialization complete. Launching workers. 
00:36:43.782 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30518, failed: 0 00:36:43.782 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2751, failed to submit 27767 00:36:43.782 success 486, unsuccess 2265, failed 0 00:36:43.782 19:08:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:43.782 19:08:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.782 19:08:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.782 19:08:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.782 19:08:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:43.782 19:08:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:43.782 19:08:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3775568 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3775568 ']' 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3775568 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3775568 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:44.718 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3775568' 00:36:44.719 killing process with pid 3775568 00:36:44.719 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3775568 00:36:44.719 19:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3775568 00:36:44.976 00:36:44.976 real 0m14.317s 00:36:44.976 user 0m54.146s 00:36:44.976 sys 0m2.637s 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.976 ************************************ 00:36:44.976 END TEST spdk_target_abort 00:36:44.976 ************************************ 00:36:44.976 19:08:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:44.976 19:08:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:44.976 19:08:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:44.976 19:08:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:44.976 19:08:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:44.976 ************************************ 00:36:44.976 START TEST kernel_target_abort 00:36:44.976 ************************************ 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:44.976 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:44.977 19:08:33 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:44.977 19:08:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:45.908 Waiting for block devices as requested 00:36:45.908 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:46.168 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:46.168 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:46.425 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:46.425 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:46.425 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:46.425 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:46.684 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:46.684 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:46.684 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:46.684 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:46.943 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:46.943 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:46.943 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:47.202 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:47.202 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:47.202 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:47.462 No valid GPT data, bailing 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:47.462 19:08:35 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:47.462 00:36:47.462 Discovery Log Number of Records 2, Generation counter 2 00:36:47.462 =====Discovery Log Entry 0====== 00:36:47.462 trtype: tcp 00:36:47.462 adrfam: ipv4 00:36:47.462 subtype: current discovery subsystem 00:36:47.462 treq: not specified, sq flow control disable supported 00:36:47.462 portid: 1 00:36:47.462 trsvcid: 4420 00:36:47.462 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:47.462 traddr: 10.0.0.1 00:36:47.462 eflags: none 00:36:47.462 sectype: none 00:36:47.462 =====Discovery Log Entry 1====== 00:36:47.462 trtype: tcp 00:36:47.462 adrfam: ipv4 00:36:47.462 subtype: nvme subsystem 00:36:47.462 treq: not specified, sq flow control disable supported 00:36:47.462 portid: 1 00:36:47.462 trsvcid: 4420 00:36:47.462 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:47.462 traddr: 10.0.0.1 00:36:47.462 eflags: none 00:36:47.462 
sectype: none 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:47.462 19:08:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.462 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.779 Initializing NVMe Controllers 00:36:50.779 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:50.779 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:50.779 Initialization complete. Launching workers. 
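The configfs sequence logged earlier (nvmf/common.sh@658–677: mkdir the subsystem, namespace, and port, echo the attributes, then symlink the subsystem under the port) can be sketched as a standalone script. This is a dry-run sketch: NQN, device, and address mirror the log, the attribute filenames are the standard nvmet configfs names, and `run` only prints each command since the real steps need root and the nvmet module loaded.

```shell
#!/bin/sh
# Dry-run sketch of the kernel nvmet target setup captured in the log
# (mkdir subsystem/namespace/port, echo attributes, symlink port->subsystem).
NQN=nqn.2016-06.io.spdk:testnqn
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/$NQN
DEV=/dev/nvme0n1

# Prints instead of executing; drop the 'echo' for a real (root) run.
run() { echo "+ $*"; }

run mkdir "$SUBSYS"
run mkdir "$SUBSYS/namespaces/1"
run mkdir "$NVMET/ports/1"
run "echo SPDK-$NQN > $SUBSYS/attr_model"
run "echo 1 > $SUBSYS/attr_allow_any_host"
run "echo $DEV > $SUBSYS/namespaces/1/device_path"
run "echo 1 > $SUBSYS/namespaces/1/enable"
run "echo 10.0.0.1 > $NVMET/ports/1/addr_traddr"
run "echo tcp > $NVMET/ports/1/addr_trtype"
run "echo 4420 > $NVMET/ports/1/addr_trsvcid"
run "echo ipv4 > $NVMET/ports/1/addr_adrfam"
run ln -s "$SUBSYS" "$NVMET/ports/1/subsystems/"
```

After a real (non-dry-run) pass, the target is reachable with `nvme discover -a 10.0.0.1 -t tcp -s 4420`, matching the two discovery log entries shown above.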
00:36:50.779 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42662, failed: 0 00:36:50.779 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 42662, failed to submit 0 00:36:50.779 success 0, unsuccess 42662, failed 0 00:36:50.779 19:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:50.779 19:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:50.779 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.068 Initializing NVMe Controllers 00:36:54.068 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:54.068 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:54.068 Initialization complete. Launching workers. 
00:36:54.068 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81646, failed: 0 00:36:54.068 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20578, failed to submit 61068 00:36:54.068 success 0, unsuccess 20578, failed 0 00:36:54.068 19:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:54.068 19:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:54.068 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.374 Initializing NVMe Controllers 00:36:57.374 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:57.374 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:57.374 Initialization complete. Launching workers. 
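The three abort runs differ only in the `-q` value taken from `qds=(4 24 64)` (abort_qd_sizes.sh@26/32). The driving loop can be sketched as below; the binary path and the `-r` connection string mirror the log and should be treated as placeholders for another checkout. The real invocation is commented out so the sketch runs without a live target.

```shell
#!/bin/sh
# Sketch of the queue-depth sweep that produces the three abort runs above.
ABORT_BIN=${ABORT_BIN:-./build/examples/abort}
TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    echo "running abort with queue depth $qd"
    # real invocation (needs a reachable target):
    # "$ABORT_BIN" -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
done
```

Each run reports I/O completed versus aborts submitted, which is how the per-depth result lines in the log are produced.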
00:36:57.374 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78327, failed: 0 00:36:57.374 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19566, failed to submit 58761 00:36:57.374 success 0, unsuccess 19566, failed 0 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:57.374 19:08:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:57.941 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:57.941 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:57.941 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:57.941 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:57.941 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:57.941 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:57.941 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:57.941 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:57.941 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:57.941 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:57.941 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:57.941 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:58.199 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:58.199 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:58.199 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:58.199 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:59.136 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:59.136 00:36:59.136 real 0m14.095s 00:36:59.136 user 0m6.020s 00:36:59.136 sys 0m3.190s 00:36:59.136 19:08:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:59.136 19:08:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.136 ************************************ 00:36:59.136 END TEST kernel_target_abort 00:36:59.136 ************************************ 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:59.136 rmmod nvme_tcp 00:36:59.136 rmmod nvme_fabrics 
00:36:59.136 rmmod nvme_keyring 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3775568 ']' 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3775568 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3775568 ']' 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3775568 00:36:59.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3775568) - No such process 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3775568 is not found' 00:36:59.136 Process with pid 3775568 is not found 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:59.136 19:08:47 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:00.514 Waiting for block devices as requested 00:37:00.514 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:37:00.514 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:00.514 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:00.514 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:00.772 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:00.772 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:00.772 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:00.772 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:01.030 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:01.030 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:01.030 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:01.030 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:01.030 0000:80:04.4 (8086 0e24): 
vfio-pci -> ioatdma 00:37:01.290 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:01.290 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:01.290 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:01.290 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:01.550 19:08:49 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:01.550 19:08:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:01.550 19:08:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:01.550 19:08:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:01.550 19:08:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.550 19:08:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:01.550 19:08:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.458 19:08:51 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:03.458 00:37:03.458 real 0m37.622s 00:37:03.458 user 1m2.228s 00:37:03.458 sys 0m9.072s 00:37:03.458 19:08:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:03.458 19:08:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:03.458 ************************************ 00:37:03.458 END TEST nvmf_abort_qd_sizes 00:37:03.458 ************************************ 00:37:03.716 19:08:51 -- common/autotest_common.sh@1142 -- # return 0 00:37:03.716 19:08:51 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:03.716 19:08:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:03.716 19:08:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:03.716 19:08:51 -- common/autotest_common.sh@10 -- # set +x 00:37:03.716 ************************************ 00:37:03.716 START TEST keyring_file 00:37:03.716 
************************************ 00:37:03.716 19:08:51 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:03.716 * Looking for test storage... 00:37:03.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:03.716 19:08:51 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:03.716 19:08:51 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.716 
19:08:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.716 19:08:51 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.716 19:08:51 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.716 19:08:51 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.716 19:08:51 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.716 19:08:51 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.716 19:08:51 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.717 19:08:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.717 19:08:51 
keyring_file -- paths/export.sh@5 -- # export PATH 00:37:03.717 19:08:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DZDRDWzINK 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DZDRDWzINK 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DZDRDWzINK 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DZDRDWzINK 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.f4PHv7GmhR 00:37:03.717 19:08:51 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:03.717 19:08:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.f4PHv7GmhR 00:37:03.717 19:08:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.f4PHv7GmhR 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.f4PHv7GmhR 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=3781311 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:03.717 19:08:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3781311 00:37:03.717 19:08:51 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3781311 ']' 00:37:03.717 19:08:51 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:03.717 19:08:51 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:03.717 19:08:51 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:03.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:03.717 19:08:51 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:03.717 19:08:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:03.717 [2024-07-14 19:08:51.929572] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:37:03.717 [2024-07-14 19:08:51.929667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781311 ] 00:37:03.977 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.977 [2024-07-14 19:08:51.991224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.977 [2024-07-14 19:08:52.081342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:04.236 19:08:52 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:04.236 [2024-07-14 19:08:52.347760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:04.236 null0 00:37:04.236 [2024-07-14 19:08:52.379805] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:04.236 [2024-07-14 19:08:52.380335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:04.236 [2024-07-14 19:08:52.387822] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.236 19:08:52 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:04.236 [2024-07-14 19:08:52.399841] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:04.236 request: 00:37:04.236 { 00:37:04.236 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.236 "secure_channel": false, 00:37:04.236 "listen_address": { 00:37:04.236 "trtype": "tcp", 00:37:04.236 "traddr": "127.0.0.1", 00:37:04.236 "trsvcid": "4420" 00:37:04.236 }, 00:37:04.236 "method": "nvmf_subsystem_add_listener", 00:37:04.236 "req_id": 1 00:37:04.236 } 00:37:04.236 Got JSON-RPC error response 00:37:04.236 response: 00:37:04.236 { 00:37:04.236 "code": -32602, 00:37:04.236 "message": "Invalid parameters" 00:37:04.236 } 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:04.236 19:08:52 keyring_file -- keyring/file.sh@46 -- # bperfpid=3781315 00:37:04.236 19:08:52 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:04.236 19:08:52 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3781315 /var/tmp/bperf.sock 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3781315 ']' 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:04.236 19:08:52 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:04.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:04.237 19:08:52 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:04.237 19:08:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:04.237 [2024-07-14 19:08:52.444663] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
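The harness above starts `bdevperf` with `-z` and then blocks in `waitforlisten` until the process accepts connections on `/var/tmp/bperf.sock`. A minimal sketch of that polling step, assuming a plain UNIX-domain socket and illustrative timeout values (the helper name and intervals here are not SPDK's):

```python
import os
import socket
import time

def wait_for_listen(path, timeout=5.0, interval=0.1):
    """Poll until a process is accepting connections on a UNIX socket.

    Sketch of the harness's waitforlisten step for /var/tmp/bperf.sock;
    checking os.path.exists first avoids a connect() on a path that the
    server has not created yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True  # someone is listening
                except OSError:
                    pass  # socket file exists but nobody is accepting yet
        time.sleep(interval)
    return False
```

Once this returns, the harness can safely issue `rpc.py -s /var/tmp/bperf.sock …` commands against the bdevperf application.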
00:37:04.237 [2024-07-14 19:08:52.444727] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781315 ] 00:37:04.495 EAL: No free 2048 kB hugepages reported on node 1 00:37:04.495 [2024-07-14 19:08:52.509304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.495 [2024-07-14 19:08:52.600852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.495 19:08:52 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:04.495 19:08:52 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:04.495 19:08:52 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DZDRDWzINK 00:37:04.495 19:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DZDRDWzINK 00:37:04.752 19:08:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.f4PHv7GmhR 00:37:04.752 19:08:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.f4PHv7GmhR 00:37:05.010 19:08:53 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:05.010 19:08:53 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:05.010 19:08:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.010 19:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.010 19:08:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:05.267 19:08:53 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DZDRDWzINK == 
\/\t\m\p\/\t\m\p\.\D\Z\D\R\D\W\z\I\N\K ]] 00:37:05.267 19:08:53 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:05.267 19:08:53 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:05.267 19:08:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.267 19:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.267 19:08:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:05.524 19:08:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.f4PHv7GmhR == \/\t\m\p\/\t\m\p\.\f\4\P\H\v\7\G\m\h\R ]] 00:37:05.524 19:08:53 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:05.524 19:08:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:05.524 19:08:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:05.524 19:08:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.524 19:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.524 19:08:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:05.782 19:08:53 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:05.782 19:08:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:05.782 19:08:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:05.782 19:08:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:05.782 19:08:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.782 19:08:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.782 19:08:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:06.039 19:08:54 keyring_file -- keyring/file.sh@54 -- # 
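The checks above repeatedly pipe `keyring_get_keys` through `jq '.[] | select(.name == "key0")'` and then extract `.path` or `.refcnt` from the matching entry. A Python equivalent of that selection, using a sample payload shaped like the fields the test reads (the real RPC output may carry additional fields):

```python
import json

def get_key(keys_json, name):
    """Equivalent of jq '.[] | select(.name == "<name>")' over the
    keyring_get_keys output; returns the first match or None."""
    for entry in json.loads(keys_json):
        if entry.get("name") == name:
            return entry
    return None

# Illustrative sample mirroring the values seen in the log above.
sample = json.dumps([
    {"name": "key0", "path": "/tmp/tmp.DZDRDWzINK", "refcnt": 1},
    {"name": "key1", "path": "/tmp/tmp.f4PHv7GmhR", "refcnt": 1},
])
```

The `(( 1 == 1 ))` lines in the log are the harness asserting the extracted `refcnt` against its expected value in the same way.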
(( 1 == 1 )) 00:37:06.039 19:08:54 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:06.039 19:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:06.296 [2024-07-14 19:08:54.427472] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:06.296 nvme0n1 00:37:06.296 19:08:54 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:06.296 19:08:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:06.296 19:08:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:06.296 19:08:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.296 19:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.296 19:08:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:06.553 19:08:54 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:06.553 19:08:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:06.553 19:08:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:06.553 19:08:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:06.553 19:08:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.553 19:08:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:06.553 19:08:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.811 19:08:55 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:06.811 19:08:55 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:07.068 Running I/O for 1 seconds... 00:37:08.006 00:37:08.006 Latency(us) 00:37:08.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.006 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:08.006 nvme0n1 : 1.01 7540.81 29.46 0.00 0.00 16871.07 9029.40 29321.29 00:37:08.006 =================================================================================================================== 00:37:08.006 Total : 7540.81 29.46 0.00 0.00 16871.07 9029.40 29321.29 00:37:08.006 0 00:37:08.006 19:08:56 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:08.006 19:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:08.263 19:08:56 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:08.263 19:08:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:08.263 19:08:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:08.263 19:08:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.263 19:08:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:08.263 19:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.521 19:08:56 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:08.521 19:08:56 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:08.521 19:08:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:08.521 19:08:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:08.521 19:08:56 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:08.521 19:08:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:08.521 19:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.778 19:08:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:08.778 19:08:56 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:08.778 19:08:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:08.778 19:08:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:08.778 19:08:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:08.778 19:08:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.778 19:08:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:08.778 19:08:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.778 19:08:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:08.778 19:08:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:09.039 [2024-07-14 19:08:57.182116] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:09.039 [2024-07-14 19:08:57.182355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb28f0 (107): Transport endpoint is not connected 00:37:09.039 [2024-07-14 19:08:57.183341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb28f0 (9): Bad file descriptor 00:37:09.039 [2024-07-14 19:08:57.184339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:09.039 [2024-07-14 19:08:57.184371] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:09.039 [2024-07-14 19:08:57.184395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:09.039 request: 00:37:09.039 { 00:37:09.039 "name": "nvme0", 00:37:09.039 "trtype": "tcp", 00:37:09.039 "traddr": "127.0.0.1", 00:37:09.039 "adrfam": "ipv4", 00:37:09.039 "trsvcid": "4420", 00:37:09.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.039 "prchk_reftag": false, 00:37:09.039 "prchk_guard": false, 00:37:09.039 "hdgst": false, 00:37:09.039 "ddgst": false, 00:37:09.039 "psk": "key1", 00:37:09.039 "method": "bdev_nvme_attach_controller", 00:37:09.039 "req_id": 1 00:37:09.039 } 00:37:09.039 Got JSON-RPC error response 00:37:09.039 response: 00:37:09.039 { 00:37:09.039 "code": -5, 00:37:09.039 "message": "Input/output error" 00:37:09.039 } 00:37:09.039 19:08:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:09.039 19:08:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:09.039 19:08:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:09.039 19:08:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:09.039 19:08:57 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:09.039 
19:08:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:09.039 19:08:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:09.039 19:08:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.039 19:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.039 19:08:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:09.297 19:08:57 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:09.297 19:08:57 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:09.297 19:08:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:09.297 19:08:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:09.297 19:08:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.297 19:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.297 19:08:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:09.556 19:08:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:09.556 19:08:57 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:09.556 19:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:09.814 19:08:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:09.814 19:08:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:10.080 19:08:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:10.080 19:08:58 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:10.080 19:08:58 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.376 19:08:58 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:10.376 19:08:58 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DZDRDWzINK 00:37:10.376 19:08:58 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DZDRDWzINK 00:37:10.376 19:08:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:10.376 19:08:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DZDRDWzINK 00:37:10.376 19:08:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:10.376 19:08:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.376 19:08:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:10.376 19:08:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:10.376 19:08:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DZDRDWzINK 00:37:10.376 19:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DZDRDWzINK 00:37:10.634 [2024-07-14 19:08:58.686499] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DZDRDWzINK': 0100660 00:37:10.634 [2024-07-14 19:08:58.686539] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:10.634 request: 00:37:10.634 { 00:37:10.634 "name": "key0", 00:37:10.634 "path": "/tmp/tmp.DZDRDWzINK", 00:37:10.634 "method": "keyring_file_add_key", 00:37:10.634 "req_id": 1 00:37:10.634 } 00:37:10.634 Got JSON-RPC error response 00:37:10.634 response: 00:37:10.634 { 00:37:10.634 "code": -1, 00:37:10.634 "message": "Operation not permitted" 
00:37:10.634 } 00:37:10.634 19:08:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:10.634 19:08:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:10.634 19:08:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:10.634 19:08:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:10.634 19:08:58 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DZDRDWzINK 00:37:10.634 19:08:58 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DZDRDWzINK 00:37:10.634 19:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DZDRDWzINK 00:37:10.891 19:08:58 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DZDRDWzINK 00:37:10.891 19:08:58 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:10.891 19:08:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:10.891 19:08:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.891 19:08:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.891 19:08:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.891 19:08:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.150 19:08:59 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:11.150 19:08:59 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:11.150 19:08:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:11.150 19:08:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
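The preceding step shows the keyring rejecting a key file after `chmod 0660` ("Invalid permissions for key file '/tmp/tmp.DZDRDWzINK': 0100660") and accepting it again after `chmod 0600`. A sketch of that policy check, assuming the rule is simply "no group/other access bits" (this mirrors the observed behavior, not SPDK's actual `keyring_file_check_path` code):

```python
import os
import stat

def check_key_file_mode(path):
    """Reject key files readable or writable by group/other, matching
    the 0600-only policy the keyring enforces in the log above."""
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            f"Invalid permissions for key file '{path}': {oct(mode)}")
    return True
```

This is why the test's `chmod 0660` makes `keyring_file_add_key` fail with "Operation not permitted" until the mode is restored to 0600.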
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:11.150 19:08:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:11.150 19:08:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.150 19:08:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:11.150 19:08:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.150 19:08:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:11.150 19:08:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:11.408 [2024-07-14 19:08:59.476652] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DZDRDWzINK': No such file or directory 00:37:11.408 [2024-07-14 19:08:59.476696] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:11.408 [2024-07-14 19:08:59.476737] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:11.408 [2024-07-14 19:08:59.476751] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:11.408 [2024-07-14 19:08:59.476763] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:11.408 request: 00:37:11.408 { 00:37:11.408 "name": "nvme0", 00:37:11.408 "trtype": "tcp", 00:37:11.408 "traddr": "127.0.0.1", 00:37:11.408 "adrfam": "ipv4", 00:37:11.408 "trsvcid": "4420", 00:37:11.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.408 
"prchk_reftag": false, 00:37:11.408 "prchk_guard": false, 00:37:11.408 "hdgst": false, 00:37:11.408 "ddgst": false, 00:37:11.408 "psk": "key0", 00:37:11.408 "method": "bdev_nvme_attach_controller", 00:37:11.408 "req_id": 1 00:37:11.408 } 00:37:11.408 Got JSON-RPC error response 00:37:11.408 response: 00:37:11.408 { 00:37:11.408 "code": -19, 00:37:11.408 "message": "No such device" 00:37:11.408 } 00:37:11.408 19:08:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:11.408 19:08:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:11.408 19:08:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:11.408 19:08:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:11.408 19:08:59 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:11.408 19:08:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:11.667 19:08:59 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PjZYvz8v6F 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:11.667 19:08:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:11.667 19:08:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:11.667 19:08:59 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:11.667 19:08:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:11.667 19:08:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:11.667 19:08:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PjZYvz8v6F 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PjZYvz8v6F 00:37:11.667 19:08:59 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.PjZYvz8v6F 00:37:11.667 19:08:59 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PjZYvz8v6F 00:37:11.667 19:08:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PjZYvz8v6F 00:37:11.926 19:09:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:11.926 19:09:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.184 nvme0n1 00:37:12.184 19:09:00 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:12.184 19:09:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:12.184 19:09:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.184 19:09:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.184 19:09:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.184 19:09:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
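The `prep_key`/`format_interchange_psk` step above feeds `prefix=NVMeTLSkey-1`, the hex key, and `digest=0` into an inline `python -` program to build the TLS PSK interchange string written to the temp file. A hedged sketch of that construction: it assumes the hex string is decoded to raw bytes, a CRC-32 of the key is appended little-endian before base64 encoding, and the two-digit field encodes the PSK hash (0 meaning no hash) — verify these details against the NVMe/TCP PSK interchange format before relying on them:

```python
import base64
import zlib

def format_interchange_psk(hex_key, digest=0):
    """Sketch of "NVMeTLSkey-1:<hh>:<base64(key || CRC-32)>:".

    Assumptions (not confirmed by the log): raw-byte key, little-endian
    CRC-32 trailer, zero-padded two-digit hash indicator.
    """
    key = bytes.fromhex(hex_key)
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02d}:{b64}:"
```

The resulting string is what `keyring_file_add_key key0 /tmp/tmp.PjZYvz8v6F` registers, and what `bdev_nvme_attach_controller --psk key0` then uses for the TLS handshake.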
00:37:12.442 19:09:00 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:12.442 19:09:00 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:12.442 19:09:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:12.700 19:09:00 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:12.700 19:09:00 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:12.700 19:09:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.700 19:09:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:12.700 19:09:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.958 19:09:01 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:12.958 19:09:01 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:12.958 19:09:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:12.958 19:09:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:12.958 19:09:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:12.958 19:09:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.958 19:09:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:13.216 19:09:01 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:13.216 19:09:01 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:13.216 19:09:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:13.474 19:09:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:37:13.474 19:09:01 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:13.474 19:09:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.732 19:09:01 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:13.732 19:09:01 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PjZYvz8v6F 00:37:13.732 19:09:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PjZYvz8v6F 00:37:13.991 19:09:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.f4PHv7GmhR 00:37:13.991 19:09:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.f4PHv7GmhR 00:37:14.249 19:09:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:14.249 19:09:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:14.509 nvme0n1 00:37:14.768 19:09:02 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:14.768 19:09:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:15.029 19:09:03 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:15.029 "subsystems": [ 00:37:15.029 { 00:37:15.029 "subsystem": "keyring", 00:37:15.029 "config": [ 00:37:15.029 { 00:37:15.029 "method": "keyring_file_add_key", 00:37:15.029 
"params": { 00:37:15.029 "name": "key0", 00:37:15.029 "path": "/tmp/tmp.PjZYvz8v6F" 00:37:15.029 } 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "method": "keyring_file_add_key", 00:37:15.029 "params": { 00:37:15.029 "name": "key1", 00:37:15.029 "path": "/tmp/tmp.f4PHv7GmhR" 00:37:15.029 } 00:37:15.029 } 00:37:15.029 ] 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "subsystem": "iobuf", 00:37:15.029 "config": [ 00:37:15.029 { 00:37:15.029 "method": "iobuf_set_options", 00:37:15.029 "params": { 00:37:15.029 "small_pool_count": 8192, 00:37:15.029 "large_pool_count": 1024, 00:37:15.029 "small_bufsize": 8192, 00:37:15.029 "large_bufsize": 135168 00:37:15.029 } 00:37:15.029 } 00:37:15.029 ] 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "subsystem": "sock", 00:37:15.029 "config": [ 00:37:15.029 { 00:37:15.029 "method": "sock_set_default_impl", 00:37:15.029 "params": { 00:37:15.029 "impl_name": "posix" 00:37:15.029 } 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "method": "sock_impl_set_options", 00:37:15.029 "params": { 00:37:15.029 "impl_name": "ssl", 00:37:15.029 "recv_buf_size": 4096, 00:37:15.029 "send_buf_size": 4096, 00:37:15.029 "enable_recv_pipe": true, 00:37:15.029 "enable_quickack": false, 00:37:15.029 "enable_placement_id": 0, 00:37:15.029 "enable_zerocopy_send_server": true, 00:37:15.029 "enable_zerocopy_send_client": false, 00:37:15.029 "zerocopy_threshold": 0, 00:37:15.029 "tls_version": 0, 00:37:15.029 "enable_ktls": false 00:37:15.029 } 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "method": "sock_impl_set_options", 00:37:15.029 "params": { 00:37:15.029 "impl_name": "posix", 00:37:15.029 "recv_buf_size": 2097152, 00:37:15.029 "send_buf_size": 2097152, 00:37:15.029 "enable_recv_pipe": true, 00:37:15.029 "enable_quickack": false, 00:37:15.029 "enable_placement_id": 0, 00:37:15.029 "enable_zerocopy_send_server": true, 00:37:15.029 "enable_zerocopy_send_client": false, 00:37:15.029 "zerocopy_threshold": 0, 00:37:15.029 "tls_version": 0, 00:37:15.029 "enable_ktls": false 
00:37:15.029 } 00:37:15.029 } 00:37:15.029 ] 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "subsystem": "vmd", 00:37:15.029 "config": [] 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "subsystem": "accel", 00:37:15.029 "config": [ 00:37:15.029 { 00:37:15.029 "method": "accel_set_options", 00:37:15.029 "params": { 00:37:15.029 "small_cache_size": 128, 00:37:15.029 "large_cache_size": 16, 00:37:15.029 "task_count": 2048, 00:37:15.029 "sequence_count": 2048, 00:37:15.029 "buf_count": 2048 00:37:15.029 } 00:37:15.029 } 00:37:15.029 ] 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "subsystem": "bdev", 00:37:15.029 "config": [ 00:37:15.029 { 00:37:15.029 "method": "bdev_set_options", 00:37:15.029 "params": { 00:37:15.029 "bdev_io_pool_size": 65535, 00:37:15.029 "bdev_io_cache_size": 256, 00:37:15.029 "bdev_auto_examine": true, 00:37:15.029 "iobuf_small_cache_size": 128, 00:37:15.029 "iobuf_large_cache_size": 16 00:37:15.029 } 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "method": "bdev_raid_set_options", 00:37:15.029 "params": { 00:37:15.029 "process_window_size_kb": 1024 00:37:15.029 } 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "method": "bdev_iscsi_set_options", 00:37:15.029 "params": { 00:37:15.029 "timeout_sec": 30 00:37:15.029 } 00:37:15.029 }, 00:37:15.029 { 00:37:15.029 "method": "bdev_nvme_set_options", 00:37:15.029 "params": { 00:37:15.029 "action_on_timeout": "none", 00:37:15.029 "timeout_us": 0, 00:37:15.029 "timeout_admin_us": 0, 00:37:15.029 "keep_alive_timeout_ms": 10000, 00:37:15.029 "arbitration_burst": 0, 00:37:15.029 "low_priority_weight": 0, 00:37:15.029 "medium_priority_weight": 0, 00:37:15.029 "high_priority_weight": 0, 00:37:15.029 "nvme_adminq_poll_period_us": 10000, 00:37:15.029 "nvme_ioq_poll_period_us": 0, 00:37:15.029 "io_queue_requests": 512, 00:37:15.029 "delay_cmd_submit": true, 00:37:15.029 "transport_retry_count": 4, 00:37:15.029 "bdev_retry_count": 3, 00:37:15.029 "transport_ack_timeout": 0, 00:37:15.029 "ctrlr_loss_timeout_sec": 0, 00:37:15.029 
"reconnect_delay_sec": 0, 00:37:15.029 "fast_io_fail_timeout_sec": 0, 00:37:15.029 "disable_auto_failback": false, 00:37:15.029 "generate_uuids": false, 00:37:15.029 "transport_tos": 0, 00:37:15.029 "nvme_error_stat": false, 00:37:15.029 "rdma_srq_size": 0, 00:37:15.029 "io_path_stat": false, 00:37:15.029 "allow_accel_sequence": false, 00:37:15.029 "rdma_max_cq_size": 0, 00:37:15.029 "rdma_cm_event_timeout_ms": 0, 00:37:15.030 "dhchap_digests": [ 00:37:15.030 "sha256", 00:37:15.030 "sha384", 00:37:15.030 "sha512" 00:37:15.030 ], 00:37:15.030 "dhchap_dhgroups": [ 00:37:15.030 "null", 00:37:15.030 "ffdhe2048", 00:37:15.030 "ffdhe3072", 00:37:15.030 "ffdhe4096", 00:37:15.030 "ffdhe6144", 00:37:15.030 "ffdhe8192" 00:37:15.030 ] 00:37:15.030 } 00:37:15.030 }, 00:37:15.030 { 00:37:15.030 "method": "bdev_nvme_attach_controller", 00:37:15.030 "params": { 00:37:15.030 "name": "nvme0", 00:37:15.030 "trtype": "TCP", 00:37:15.030 "adrfam": "IPv4", 00:37:15.030 "traddr": "127.0.0.1", 00:37:15.030 "trsvcid": "4420", 00:37:15.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.030 "prchk_reftag": false, 00:37:15.030 "prchk_guard": false, 00:37:15.030 "ctrlr_loss_timeout_sec": 0, 00:37:15.030 "reconnect_delay_sec": 0, 00:37:15.030 "fast_io_fail_timeout_sec": 0, 00:37:15.030 "psk": "key0", 00:37:15.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.030 "hdgst": false, 00:37:15.030 "ddgst": false 00:37:15.030 } 00:37:15.030 }, 00:37:15.030 { 00:37:15.030 "method": "bdev_nvme_set_hotplug", 00:37:15.030 "params": { 00:37:15.030 "period_us": 100000, 00:37:15.030 "enable": false 00:37:15.030 } 00:37:15.030 }, 00:37:15.030 { 00:37:15.030 "method": "bdev_wait_for_examine" 00:37:15.030 } 00:37:15.030 ] 00:37:15.030 }, 00:37:15.030 { 00:37:15.030 "subsystem": "nbd", 00:37:15.030 "config": [] 00:37:15.030 } 00:37:15.030 ] 00:37:15.030 }' 00:37:15.030 19:09:03 keyring_file -- keyring/file.sh@114 -- # killprocess 3781315 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@948 -- 
# '[' -z 3781315 ']' 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3781315 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3781315 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3781315' 00:37:15.030 killing process with pid 3781315 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@967 -- # kill 3781315 00:37:15.030 Received shutdown signal, test time was about 1.000000 seconds 00:37:15.030 00:37:15.030 Latency(us) 00:37:15.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.030 =================================================================================================================== 00:37:15.030 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:15.030 19:09:03 keyring_file -- common/autotest_common.sh@972 -- # wait 3781315 00:37:15.288 19:09:03 keyring_file -- keyring/file.sh@117 -- # bperfpid=3782889 00:37:15.288 19:09:03 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3782889 /var/tmp/bperf.sock 00:37:15.288 19:09:03 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3782889 ']' 00:37:15.288 19:09:03 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:15.288 19:09:03 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:15.288 19:09:03 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 
00:37:15.288 19:09:03 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:15.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:15.288 19:09:03 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:15.288 "subsystems": [ 00:37:15.288 { 00:37:15.288 "subsystem": "keyring", 00:37:15.288 "config": [ 00:37:15.288 { 00:37:15.288 "method": "keyring_file_add_key", 00:37:15.288 "params": { 00:37:15.288 "name": "key0", 00:37:15.288 "path": "/tmp/tmp.PjZYvz8v6F" 00:37:15.288 } 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "method": "keyring_file_add_key", 00:37:15.288 "params": { 00:37:15.288 "name": "key1", 00:37:15.288 "path": "/tmp/tmp.f4PHv7GmhR" 00:37:15.288 } 00:37:15.288 } 00:37:15.288 ] 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "subsystem": "iobuf", 00:37:15.288 "config": [ 00:37:15.288 { 00:37:15.288 "method": "iobuf_set_options", 00:37:15.288 "params": { 00:37:15.288 "small_pool_count": 8192, 00:37:15.288 "large_pool_count": 1024, 00:37:15.288 "small_bufsize": 8192, 00:37:15.288 "large_bufsize": 135168 00:37:15.288 } 00:37:15.288 } 00:37:15.288 ] 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "subsystem": "sock", 00:37:15.288 "config": [ 00:37:15.288 { 00:37:15.288 "method": "sock_set_default_impl", 00:37:15.288 "params": { 00:37:15.288 "impl_name": "posix" 00:37:15.288 } 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "method": "sock_impl_set_options", 00:37:15.288 "params": { 00:37:15.288 "impl_name": "ssl", 00:37:15.288 "recv_buf_size": 4096, 00:37:15.288 "send_buf_size": 4096, 00:37:15.288 "enable_recv_pipe": true, 00:37:15.288 "enable_quickack": false, 00:37:15.288 "enable_placement_id": 0, 00:37:15.288 "enable_zerocopy_send_server": true, 00:37:15.288 "enable_zerocopy_send_client": false, 00:37:15.288 "zerocopy_threshold": 0, 00:37:15.288 "tls_version": 0, 00:37:15.288 "enable_ktls": false 00:37:15.288 } 00:37:15.288 }, 
00:37:15.288 { 00:37:15.288 "method": "sock_impl_set_options", 00:37:15.288 "params": { 00:37:15.288 "impl_name": "posix", 00:37:15.288 "recv_buf_size": 2097152, 00:37:15.288 "send_buf_size": 2097152, 00:37:15.288 "enable_recv_pipe": true, 00:37:15.288 "enable_quickack": false, 00:37:15.288 "enable_placement_id": 0, 00:37:15.288 "enable_zerocopy_send_server": true, 00:37:15.288 "enable_zerocopy_send_client": false, 00:37:15.288 "zerocopy_threshold": 0, 00:37:15.288 "tls_version": 0, 00:37:15.288 "enable_ktls": false 00:37:15.288 } 00:37:15.288 } 00:37:15.288 ] 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "subsystem": "vmd", 00:37:15.288 "config": [] 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "subsystem": "accel", 00:37:15.288 "config": [ 00:37:15.288 { 00:37:15.288 "method": "accel_set_options", 00:37:15.288 "params": { 00:37:15.288 "small_cache_size": 128, 00:37:15.288 "large_cache_size": 16, 00:37:15.288 "task_count": 2048, 00:37:15.288 "sequence_count": 2048, 00:37:15.288 "buf_count": 2048 00:37:15.288 } 00:37:15.288 } 00:37:15.288 ] 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "subsystem": "bdev", 00:37:15.288 "config": [ 00:37:15.288 { 00:37:15.288 "method": "bdev_set_options", 00:37:15.288 "params": { 00:37:15.288 "bdev_io_pool_size": 65535, 00:37:15.288 "bdev_io_cache_size": 256, 00:37:15.288 "bdev_auto_examine": true, 00:37:15.288 "iobuf_small_cache_size": 128, 00:37:15.288 "iobuf_large_cache_size": 16 00:37:15.288 } 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "method": "bdev_raid_set_options", 00:37:15.288 "params": { 00:37:15.288 "process_window_size_kb": 1024 00:37:15.288 } 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "method": "bdev_iscsi_set_options", 00:37:15.288 "params": { 00:37:15.288 "timeout_sec": 30 00:37:15.288 } 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "method": "bdev_nvme_set_options", 00:37:15.288 "params": { 00:37:15.288 "action_on_timeout": "none", 00:37:15.288 "timeout_us": 0, 00:37:15.288 "timeout_admin_us": 0, 00:37:15.288 
"keep_alive_timeout_ms": 10000, 00:37:15.288 "arbitration_burst": 0, 00:37:15.288 "low_priority_weight": 0, 00:37:15.288 "medium_priority_weight": 0, 00:37:15.288 "high_priority_weight": 0, 00:37:15.288 "nvme_adminq_poll_period_us": 10000, 00:37:15.288 "nvme_ioq_poll_period_us": 0, 00:37:15.288 "io_queue_requests": 512, 00:37:15.288 "delay_cmd_submit": true, 00:37:15.288 "transport_retry_count": 4, 00:37:15.288 "bdev_retry_count": 3, 00:37:15.288 "transport_ack_timeout": 0, 00:37:15.288 "ctrlr_loss_timeout_sec": 0, 00:37:15.288 "reconnect_delay_sec": 0, 00:37:15.288 "fast_io_fail_timeout_sec": 0, 00:37:15.288 "disable_auto_failback": false, 00:37:15.288 "generate_uuids": false, 00:37:15.288 "transport_tos": 0, 00:37:15.288 "nvme_error_stat": false, 00:37:15.288 "rdma_srq_size": 0, 00:37:15.288 "io_path_stat": false, 00:37:15.288 "allow_accel_sequence": false, 00:37:15.288 "rdma_max_cq_size": 0, 00:37:15.288 "rdma_cm_event_timeout_ms": 0, 00:37:15.288 "dhchap_digests": [ 00:37:15.288 "sha256", 00:37:15.288 "sha384", 00:37:15.288 "sha512" 00:37:15.288 ], 00:37:15.288 "dhchap_dhgroups": [ 00:37:15.288 "null", 00:37:15.288 "ffdhe2048", 00:37:15.288 "ffdhe3072", 00:37:15.288 "ffdhe4096", 00:37:15.288 "ffdhe6144", 00:37:15.288 "ffdhe8192" 00:37:15.288 ] 00:37:15.288 } 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "method": "bdev_nvme_attach_controller", 00:37:15.288 "params": { 00:37:15.288 "name": "nvme0", 00:37:15.288 "trtype": "TCP", 00:37:15.288 "adrfam": "IPv4", 00:37:15.288 "traddr": "127.0.0.1", 00:37:15.288 "trsvcid": "4420", 00:37:15.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.288 "prchk_reftag": false, 00:37:15.288 "prchk_guard": false, 00:37:15.288 "ctrlr_loss_timeout_sec": 0, 00:37:15.288 "reconnect_delay_sec": 0, 00:37:15.288 "fast_io_fail_timeout_sec": 0, 00:37:15.288 "psk": "key0", 00:37:15.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.288 "hdgst": false, 00:37:15.288 "ddgst": false 00:37:15.288 } 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 
"method": "bdev_nvme_set_hotplug", 00:37:15.288 "params": { 00:37:15.288 "period_us": 100000, 00:37:15.288 "enable": false 00:37:15.288 } 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "method": "bdev_wait_for_examine" 00:37:15.288 } 00:37:15.288 ] 00:37:15.288 }, 00:37:15.288 { 00:37:15.288 "subsystem": "nbd", 00:37:15.289 "config": [] 00:37:15.289 } 00:37:15.289 ] 00:37:15.289 }' 00:37:15.289 19:09:03 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:15.289 19:09:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:15.289 [2024-07-14 19:09:03.342361] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:37:15.289 [2024-07-14 19:09:03.342456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782889 ] 00:37:15.289 EAL: No free 2048 kB hugepages reported on node 1 00:37:15.289 [2024-07-14 19:09:03.404904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.289 [2024-07-14 19:09:03.493422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:15.547 [2024-07-14 19:09:03.683419] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:16.114 19:09:04 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:16.114 19:09:04 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:16.114 19:09:04 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:16.114 19:09:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.114 19:09:04 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:16.372 19:09:04 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:16.372 19:09:04 keyring_file -- 
keyring/file.sh@121 -- # get_refcnt key0 00:37:16.372 19:09:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:16.372 19:09:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.372 19:09:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.372 19:09:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.372 19:09:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:16.629 19:09:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:16.629 19:09:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:16.629 19:09:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:16.629 19:09:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:16.629 19:09:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:16.629 19:09:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:16.629 19:09:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:16.887 19:09:05 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:16.887 19:09:05 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:16.887 19:09:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:16.887 19:09:05 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:17.145 19:09:05 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:17.145 19:09:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:17.145 19:09:05 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.PjZYvz8v6F /tmp/tmp.f4PHv7GmhR 00:37:17.145 19:09:05 keyring_file -- keyring/file.sh@20 -- # killprocess 3782889 
00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3782889 ']' 00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3782889 00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3782889 00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3782889' 00:37:17.145 killing process with pid 3782889 00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@967 -- # kill 3782889 00:37:17.145 Received shutdown signal, test time was about 1.000000 seconds 00:37:17.145 00:37:17.145 Latency(us) 00:37:17.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.145 =================================================================================================================== 00:37:17.145 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:17.145 19:09:05 keyring_file -- common/autotest_common.sh@972 -- # wait 3782889 00:37:17.405 19:09:05 keyring_file -- keyring/file.sh@21 -- # killprocess 3781311 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3781311 ']' 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3781311 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3781311 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@954 
-- # process_name=reactor_0 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3781311' 00:37:17.405 killing process with pid 3781311 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@967 -- # kill 3781311 00:37:17.405 [2024-07-14 19:09:05.579724] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:17.405 19:09:05 keyring_file -- common/autotest_common.sh@972 -- # wait 3781311 00:37:17.975 00:37:17.975 real 0m14.286s 00:37:17.975 user 0m35.665s 00:37:17.975 sys 0m3.306s 00:37:17.975 19:09:05 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:17.975 19:09:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.975 ************************************ 00:37:17.975 END TEST keyring_file 00:37:17.975 ************************************ 00:37:17.975 19:09:06 -- common/autotest_common.sh@1142 -- # return 0 00:37:17.975 19:09:06 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:17.975 19:09:06 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:17.975 19:09:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:17.975 19:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:17.975 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:37:17.975 ************************************ 00:37:17.975 START TEST keyring_linux 00:37:17.975 ************************************ 00:37:17.975 19:09:06 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:17.975 * Looking for test storage... 
00:37:17.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:17.975 19:09:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:17.975 19:09:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.975 19:09:06 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.975 19:09:06 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.975 19:09:06 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.975 19:09:06 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.975 19:09:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.975 19:09:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.975 19:09:06 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.975 19:09:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:17.975 19:09:06 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:17.975 19:09:06 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:17.975 19:09:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:17.975 19:09:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:17.975 19:09:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:17.975 19:09:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:17.975 19:09:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:17.975 19:09:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:17.975 19:09:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:17.975 19:09:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:17.975 19:09:06 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:17.976 /tmp/:spdk-test:key0 00:37:17.976 19:09:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:17.976 19:09:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:17.976 19:09:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:17.976 /tmp/:spdk-test:key1 00:37:17.976 19:09:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3783266 00:37:18.234 19:09:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:18.234 19:09:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3783266 00:37:18.234 19:09:06 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3783266 ']' 00:37:18.234 19:09:06 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.234 19:09:06 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:18.234 19:09:06 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.234 19:09:06 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:18.234 19:09:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:18.234 [2024-07-14 19:09:06.251282] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:37:18.234 [2024-07-14 19:09:06.251361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783266 ] 00:37:18.234 EAL: No free 2048 kB hugepages reported on node 1 00:37:18.234 [2024-07-14 19:09:06.315953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.234 [2024-07-14 19:09:06.407934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.494 19:09:06 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:18.494 19:09:06 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:18.494 19:09:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:18.494 19:09:06 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.494 19:09:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:18.494 [2024-07-14 19:09:06.664972] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.494 null0 00:37:18.494 [2024-07-14 19:09:06.697001] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:18.494 [2024-07-14 19:09:06.697513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:18.494 19:09:06 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.494 19:09:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:18.494 1049662882 00:37:18.494 19:09:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:18.753 594028251 00:37:18.753 19:09:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3783395 00:37:18.753 19:09:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3783395 
/var/tmp/bperf.sock 00:37:18.753 19:09:06 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:18.753 19:09:06 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3783395 ']' 00:37:18.753 19:09:06 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:18.753 19:09:06 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:18.753 19:09:06 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:18.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:18.753 19:09:06 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:18.753 19:09:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:18.753 [2024-07-14 19:09:06.767274] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:37:18.753 [2024-07-14 19:09:06.767353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783395 ] 00:37:18.753 EAL: No free 2048 kB hugepages reported on node 1 00:37:18.753 [2024-07-14 19:09:06.830290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.753 [2024-07-14 19:09:06.922454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.753 19:09:06 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:18.753 19:09:06 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:18.753 19:09:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:18.753 19:09:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:19.011 19:09:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:19.011 19:09:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:19.577 19:09:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:19.577 19:09:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:19.835 [2024-07-14 19:09:07.815592] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:19.835 
nvme0n1 00:37:19.835 19:09:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:19.835 19:09:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:19.835 19:09:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:19.835 19:09:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:19.836 19:09:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.836 19:09:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:20.094 19:09:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:20.094 19:09:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:20.094 19:09:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:20.094 19:09:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:20.094 19:09:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.094 19:09:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.094 19:09:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:20.352 19:09:08 keyring_linux -- keyring/linux.sh@25 -- # sn=1049662882 00:37:20.353 19:09:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:20.353 19:09:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:20.353 19:09:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 1049662882 == \1\0\4\9\6\6\2\8\8\2 ]] 00:37:20.353 19:09:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1049662882 00:37:20.353 19:09:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:20.353 19:09:08 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:20.353 Running I/O for 1 seconds... 00:37:21.723 00:37:21.723 Latency(us) 00:37:21.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.723 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:21.723 nvme0n1 : 1.01 7687.68 30.03 0.00 0.00 16514.00 7767.23 24758.04 00:37:21.724 =================================================================================================================== 00:37:21.724 Total : 7687.68 30.03 0.00 0.00 16514.00 7767.23 24758.04 00:37:21.724 0 00:37:21.724 19:09:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:21.724 19:09:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:21.724 19:09:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:21.724 19:09:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:21.724 19:09:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:21.724 19:09:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:21.724 19:09:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.724 19:09:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:21.981 19:09:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:21.981 19:09:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:21.981 19:09:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:21.981 19:09:10 keyring_linux -- 
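The `check_keys` helper traced above verifies the expected number of keyring entries by piping `keyring_get_keys` through `jq length` and comparing against `count`. A minimal stand-in for that count check, using python3 in place of jq and a hypothetical JSON payload shaped like the RPC output (not a real `keyring_get_keys` response):

```shell
# Hypothetical keyring listing, shaped like keyring_get_keys output.
json='[{"name": ":spdk-test:key0", "sn": 1049662882}]'
# Count the entries, as the harness does with `jq length`.
count=$(printf '%s' "$json" | python3 -c "import json,sys; print(len(json.load(sys.stdin)))")
expected=1
if [ "$count" -eq "$expected" ]; then
  echo "key count OK: $count"
else
  echo "key count mismatch: got $count, want $expected"
  exit 1
fi
```

The real helper then cross-checks each key's serial number (`jq -r .sn`) against `keyctl search @s user <name>` before printing the key material, as seen in the log.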
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:21.981 19:09:10 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:21.982 19:09:10 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:21.982 19:09:10 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:21.982 19:09:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:21.982 19:09:10 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:21.982 19:09:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:21.982 19:09:10 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:21.982 19:09:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:22.240 [2024-07-14 19:09:10.285424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:22.240 [2024-07-14 19:09:10.285996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1c860 (107): Transport endpoint is not connected 00:37:22.240 [2024-07-14 19:09:10.286985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1c1c860 (9): Bad file descriptor 00:37:22.240 [2024-07-14 19:09:10.287990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:22.240 [2024-07-14 19:09:10.288010] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:22.240 [2024-07-14 19:09:10.288023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:22.240 request: 00:37:22.240 { 00:37:22.240 "name": "nvme0", 00:37:22.240 "trtype": "tcp", 00:37:22.240 "traddr": "127.0.0.1", 00:37:22.240 "adrfam": "ipv4", 00:37:22.240 "trsvcid": "4420", 00:37:22.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.240 "prchk_reftag": false, 00:37:22.240 "prchk_guard": false, 00:37:22.240 "hdgst": false, 00:37:22.240 "ddgst": false, 00:37:22.240 "psk": ":spdk-test:key1", 00:37:22.240 "method": "bdev_nvme_attach_controller", 00:37:22.240 "req_id": 1 00:37:22.240 } 00:37:22.240 Got JSON-RPC error response 00:37:22.240 response: 00:37:22.240 { 00:37:22.240 "code": -5, 00:37:22.240 "message": "Input/output error" 00:37:22.240 } 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
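The negative test above (`NOT bperf_cmd bdev_nvme_attach_controller ... --psk :spdk-test:key1`) expects the RPC to fail, and the harness inspects the JSON-RPC error response (`"code": -5`, Input/output error) to decide the exit status. A minimal sketch of extracting that error code, using a hypothetical payload copied in shape from the log rather than a live RPC call:

```shell
# Hypothetical JSON-RPC error payload, shaped like the
# bdev_nvme_attach_controller failure captured in the log.
resp='{"code": -5, "message": "Input/output error"}'
# Pull out the numeric error code from the response body.
code=$(printf '%s' "$resp" | python3 -c "import json,sys; print(json.load(sys.stdin)['code'])")
if [ "$code" -lt 0 ]; then
  echo "RPC failed with code $code"
fi
```

In the harness itself the expected-failure bookkeeping is done by the `NOT`/`valid_exec_arg` helpers, which map a nonzero exit status (`es=1`) back to success for the test.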
search @s user :spdk-test:key0 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@33 -- # sn=1049662882 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1049662882 00:37:22.240 1 links removed 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@33 -- # sn=594028251 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 594028251 00:37:22.240 1 links removed 00:37:22.240 19:09:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3783395 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3783395 ']' 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3783395 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3783395 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3783395' 00:37:22.240 killing process with pid 3783395 00:37:22.240 19:09:10 keyring_linux -- common/autotest_common.sh@967 -- # kill 3783395 00:37:22.240 Received shutdown signal, test time was about 1.000000 seconds 00:37:22.240 00:37:22.240 Latency(us) 00:37:22.241 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.241 =================================================================================================================== 00:37:22.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:22.241 19:09:10 keyring_linux -- common/autotest_common.sh@972 -- # wait 3783395 00:37:22.500 19:09:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3783266 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3783266 ']' 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3783266 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3783266 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3783266' 00:37:22.500 killing process with pid 3783266 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@967 -- # kill 3783266 00:37:22.500 19:09:10 keyring_linux -- common/autotest_common.sh@972 -- # wait 3783266 00:37:23.065 00:37:23.065 real 0m4.960s 00:37:23.065 user 0m9.470s 00:37:23.065 sys 0m1.654s 00:37:23.065 19:09:11 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:23.065 19:09:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:23.065 ************************************ 00:37:23.065 END TEST keyring_linux 00:37:23.065 ************************************ 00:37:23.065 19:09:11 -- common/autotest_common.sh@1142 -- # return 0 00:37:23.065 19:09:11 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@312 -- # '[' 0 
-eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:23.065 19:09:11 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:23.065 19:09:11 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:23.065 19:09:11 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:23.065 19:09:11 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:23.065 19:09:11 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:23.065 19:09:11 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:23.065 19:09:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:23.065 19:09:11 -- common/autotest_common.sh@10 -- # set +x 00:37:23.066 19:09:11 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:23.066 19:09:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:23.066 19:09:11 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:23.066 19:09:11 -- common/autotest_common.sh@10 -- # set +x 00:37:24.967 INFO: APP EXITING 00:37:24.967 INFO: killing all VMs 00:37:24.967 INFO: killing vhost app 00:37:24.967 INFO: EXIT DONE 00:37:25.900 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:25.900 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:25.900 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:25.900 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:25.900 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:25.900 0000:00:04.3 (8086 0e23): Already 
using the ioatdma driver 00:37:25.900 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:25.900 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:25.900 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:25.900 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:25.900 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:25.900 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:25.900 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:25.900 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:25.900 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:25.900 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:25.900 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:27.278 Cleaning 00:37:27.278 Removing: /var/run/dpdk/spdk0/config 00:37:27.278 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:27.278 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:27.278 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:27.278 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:27.278 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:27.278 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:27.278 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:27.278 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:27.278 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:27.278 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:27.278 Removing: /var/run/dpdk/spdk1/config 00:37:27.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:27.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:27.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:27.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:27.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:27.278 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:27.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:27.278 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:27.278 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:27.278 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:27.278 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:27.278 Removing: /var/run/dpdk/spdk2/config 00:37:27.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:27.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:27.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:27.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:27.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:27.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:27.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:27.278 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:27.278 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:27.278 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:27.278 Removing: /var/run/dpdk/spdk3/config 00:37:27.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:27.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:27.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:27.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:27.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:27.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:27.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:27.278 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:27.278 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:27.278 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:27.278 Removing: /var/run/dpdk/spdk4/config 00:37:27.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:27.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:27.278 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:27.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:27.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:27.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:27.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:27.278 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:27.278 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:27.278 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:27.278 Removing: /dev/shm/bdev_svc_trace.1 00:37:27.278 Removing: /dev/shm/nvmf_trace.0 00:37:27.278 Removing: /dev/shm/spdk_tgt_trace.pid3463227 00:37:27.278 Removing: /var/run/dpdk/spdk0 00:37:27.278 Removing: /var/run/dpdk/spdk1 00:37:27.278 Removing: /var/run/dpdk/spdk2 00:37:27.278 Removing: /var/run/dpdk/spdk3 00:37:27.278 Removing: /var/run/dpdk/spdk4 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3461675 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3462409 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3463227 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3463658 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3464345 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3464487 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3465205 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3465215 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3465459 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3466710 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3467694 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3467877 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3468178 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3468388 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3468578 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3468734 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3468888 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3469077 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3469382 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3471733 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3471901 00:37:27.278 
Removing: /var/run/dpdk/spdk_pid3472063 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3472066 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3472497 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3472500 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3472933 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3472942 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3473225 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3473242 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3473404 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3473409 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3473903 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3474055 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3474248 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3474416 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3474443 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3474629 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3474786 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3474939 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3475218 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3475374 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3475533 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3475745 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3476047 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3476232 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3476399 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3476671 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3476834 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3477410 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3477685 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3477919 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3478085 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3478239 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3478518 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3478676 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3478841 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3479046 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3479180 00:37:27.278 Removing: 
/var/run/dpdk/spdk_pid3479386 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3481491 00:37:27.278 Removing: /var/run/dpdk/spdk_pid3534341 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3537068 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3544431 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3547698 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3550043 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3550449 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3554417 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3558124 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3558131 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3558783 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3559399 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3559978 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3560375 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3560388 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3560643 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3560761 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3560779 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3561372 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3561972 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3562633 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3563046 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3563161 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3563303 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3564180 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3564901 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3570858 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3571016 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3573648 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3577243 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3579397 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3585659 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3590846 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3592032 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3592706 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3603000 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3605708 
00:37:27.537 Removing: /var/run/dpdk/spdk_pid3631116 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3633978 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3635156 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3636350 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3636481 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3636621 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3636645 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3637076 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3638389 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3638991 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3639417 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3641022 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3641386 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3641887 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3644273 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3647566 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3651054 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3674536 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3677198 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3680949 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3681885 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3682975 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3685624 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3688368 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3692568 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3692570 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3695340 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3695473 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3695609 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3695990 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3696000 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3697075 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3698252 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3699427 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3700609 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3701783 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3702987 00:37:27.537 Removing: 
/var/run/dpdk/spdk_pid3706757 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3707214 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3708495 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3709231 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3712812 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3714783 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3718801 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3722262 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3728485 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3732829 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3732831 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3745026 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3745433 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3745953 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3746369 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3746942 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3747356 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3747762 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3748195 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3750663 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3750826 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3755280 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3755380 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3756984 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3762011 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3762021 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3764872 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3766183 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3767587 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3768444 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3769846 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3770601 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3775884 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3776255 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3776643 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3778203 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3778600 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3778880 
00:37:27.537 Removing: /var/run/dpdk/spdk_pid3781311 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3781315 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3782889 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3783266 00:37:27.537 Removing: /var/run/dpdk/spdk_pid3783395 00:37:27.537 Clean 00:37:27.823 19:09:15 -- common/autotest_common.sh@1451 -- # return 0 00:37:27.823 19:09:15 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:27.823 19:09:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:27.823 19:09:15 -- common/autotest_common.sh@10 -- # set +x 00:37:27.823 19:09:15 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:27.823 19:09:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:27.823 19:09:15 -- common/autotest_common.sh@10 -- # set +x 00:37:27.823 19:09:15 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:27.823 19:09:15 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:27.823 19:09:15 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:27.823 19:09:15 -- spdk/autotest.sh@391 -- # hash lcov 00:37:27.823 19:09:15 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:27.823 19:09:15 -- spdk/autotest.sh@393 -- # hostname 00:37:27.823 19:09:15 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:27.823 geninfo: WARNING: invalid characters removed from testname! 
00:37:59.901 19:09:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:59.901 19:09:47 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:02.441 19:09:50 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:05.735 19:09:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:08.275 19:09:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:11.567 19:09:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:14.108 19:10:02 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:38:14.108 19:10:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:14.108 19:10:02 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:38:14.108 19:10:02 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:14.108 19:10:02 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:14.108 19:10:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:14.108 19:10:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:14.108 19:10:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:14.108 19:10:02 -- paths/export.sh@5 -- $ export PATH
00:38:14.108 19:10:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:14.108 19:10:02 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:38:14.108 19:10:02 -- common/autobuild_common.sh@444 -- $ date +%s
00:38:14.108 19:10:02 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720977002.XXXXXX
00:38:14.108 19:10:02 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720977002.k1KuO2
00:38:14.108 19:10:02 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:38:14.108 19:10:02 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']'
00:38:14.108 19:10:02 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:38:14.108 19:10:02 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:38:14.108 19:10:02 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:38:14.108 19:10:02 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:38:14.108 19:10:02 -- common/autobuild_common.sh@460 -- $ get_config_params
00:38:14.108 19:10:02 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:38:14.108 19:10:02 -- common/autotest_common.sh@10 -- $ set +x
00:38:14.108 19:10:02 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:38:14.108 19:10:02 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:38:14.108 19:10:02 -- pm/common@17 -- $ local monitor
00:38:14.108 19:10:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:14.108 19:10:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:14.108 19:10:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:14.108 19:10:02 -- pm/common@21 -- $ date +%s
00:38:14.108 19:10:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:14.108 19:10:02 -- pm/common@21 -- $ date +%s
00:38:14.108 19:10:02 -- pm/common@25 -- $ sleep 1
00:38:14.108 19:10:02 -- pm/common@21 -- $ date +%s
00:38:14.108 19:10:02 -- pm/common@21 -- $ date +%s
00:38:14.108 19:10:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720977002
00:38:14.108 19:10:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720977002
00:38:14.108 19:10:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720977002
00:38:14.108 19:10:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720977002
00:38:14.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720977002_collect-vmstat.pm.log
00:38:14.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720977002_collect-cpu-load.pm.log
00:38:14.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720977002_collect-cpu-temp.pm.log
00:38:14.108 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720977002_collect-bmc-pm.bmc.pm.log
00:38:15.043 19:10:03 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:38:15.043 19:10:03 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:38:15.043 19:10:03 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:15.043 19:10:03 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:38:15.043 19:10:03 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:38:15.043 19:10:03 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:38:15.043 19:10:03 -- spdk/autopackage.sh@19 -- $ timing_finish
00:38:15.043 19:10:03 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:15.043 19:10:03 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:38:15.043 19:10:03 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:15.301 19:10:03 -- spdk/autopackage.sh@20 -- $ exit 0
00:38:15.301 19:10:03 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:38:15.301 19:10:03 -- pm/common@29 -- $ signal_monitor_resources TERM
00:38:15.301 19:10:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:38:15.301 19:10:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:15.301 19:10:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:38:15.301 19:10:03 -- pm/common@44 -- $ pid=3795166
00:38:15.301 19:10:03 -- pm/common@50 -- $ kill -TERM 3795166
00:38:15.301 19:10:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:15.301 19:10:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:38:15.301 19:10:03 -- pm/common@44 -- $ pid=3795168
00:38:15.301 19:10:03 -- pm/common@50 -- $ kill -TERM 3795168
00:38:15.301 19:10:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:15.301 19:10:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:38:15.301 19:10:03 -- pm/common@44 -- $ pid=3795170
00:38:15.301 19:10:03 -- pm/common@50 -- $ kill -TERM 3795170
00:38:15.301 19:10:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:15.301 19:10:03 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:38:15.301 19:10:03 -- pm/common@44 -- $ pid=3795202
00:38:15.301 19:10:03 -- pm/common@50 -- $ sudo -E kill -TERM 3795202
00:38:15.301 + [[ -n 3357073 ]]
00:38:15.301 + sudo kill 3357073
00:38:15.309 [Pipeline] }
00:38:15.323 [Pipeline] // stage
00:38:15.326 [Pipeline] }
00:38:15.336 [Pipeline] // timeout
00:38:15.339 [Pipeline] }
00:38:15.349 [Pipeline] // catchError
00:38:15.352 [Pipeline] }
00:38:15.363 [Pipeline] // wrap
00:38:15.367 [Pipeline] }
00:38:15.379 [Pipeline] // catchError
00:38:15.386 [Pipeline] stage
00:38:15.388 [Pipeline] { (Epilogue)
00:38:15.399 [Pipeline] catchError
00:38:15.401 [Pipeline] {
00:38:15.412 [Pipeline] echo
00:38:15.414 Cleanup processes
00:38:15.417 [Pipeline] sh
00:38:15.697 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:15.697 3795329 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:38:15.697 3795432 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:15.711 [Pipeline] sh
00:38:16.016 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:16.017 ++ grep -v 'sudo pgrep'
00:38:16.017 ++ awk '{print $1}'
00:38:16.017 + sudo kill -9 3795329
00:38:16.036 [Pipeline] sh
00:38:16.317 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:26.292 [Pipeline] sh
00:38:26.576 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:26.576 Artifacts sizes are good
00:38:26.590 [Pipeline] archiveArtifacts
00:38:26.596 Archiving artifacts
00:38:26.824 [Pipeline] sh
00:38:27.122 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:27.136 [Pipeline] cleanWs
00:38:27.146 [WS-CLEANUP] Deleting project workspace...
00:38:27.146 [WS-CLEANUP] Deferred wipeout is used...
00:38:27.152 [WS-CLEANUP] done
00:38:27.154 [Pipeline] }
00:38:27.174 [Pipeline] // catchError
00:38:27.185 [Pipeline] sh
00:38:27.465 + logger -p user.info -t JENKINS-CI
00:38:27.473 [Pipeline] }
00:38:27.489 [Pipeline] // stage
00:38:27.494 [Pipeline] }
00:38:27.510 [Pipeline] // node
00:38:27.516 [Pipeline] End of Pipeline
00:38:27.550 Finished: SUCCESS